How to recover a dropped package from archivelogs – based on research for VOODOO engine


28.05.2017
by Kamil Stawiarski

Sorry for the long time without a post, but Marcin Rydz and I are writing a new heterogeneous replication product based on archivelogs and it’s consuming a looooooot of our time 🙂

Yes, I know it’s nothing new on the market. There are other cool products and I know they’ll probably be better. But we want our own! Something simple. And maybe we’ll publish it as Open Source? We don’t know yet, but we’re thinking about it 🙂

It’s called VOODOO and the simplest description of it would be:
"We all want to have a bar and drive a Porshe… But in most of the cases we can afford only a few pints and drive a VW. Replication engines are expensive and buying something like Golden Gate when you only have SE database and you want to replicate it to PostgreSQL is like buying a single malt to make Cola taste more bearable. That’s why we wrote our own replicator tool. A simple one. A cheap one. Nothing fancy. But it works."

OK. So you remember that I showed you how to use RICO to recover a dropped package in THIS article.

Now we will do it based on archivelogs. And we will use Python to do it. Happy?

What we will need is Python 2.6.6 or newer and a package called bitstring.

So we already know that all PL/SQL code is stored in the table SOURCE$.

SQL> desc source$
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 OBJ#					   NOT NULL NUMBER
 LINE					   NOT NULL NUMBER
 SOURCE 					    VARCHAR2(4000)

SQL> select object_id, data_object_id
  2  from dba_objects
  3  where object_name='SOURCE$';

 OBJECT_ID DATA_OBJECT_ID
---------- --------------
       301	      301

All operations like "CREATE OR REPLACE PACKAGE" are a simple combination of deletes and inserts on this table. Knowing that, we can search the archivelogs for those operations with object_id = 301 and data_object_id = 301.

Sounds simple, doesn’t it? The biggest problem starts when we want to analyse the internal structure of a REDO RECORD.

Fortunately, people smarter than me have written some good stuff about it – you can check, for example, this set of articles by Julian Dyke: http://www.juliandyke.com/Internals/Redo/Redo.php

It would also be a good idea to buy the book written by Jonathan Lewis: http://www.apress.com/us/book/9781430239543

So to summarise: all operations in REDO are divided into change vectors and each vector has its own operation code. We will be interested in operation code "5.1" because this is the UNDO information stored in the redo logs – and we will need it to restore the previous content of the SOURCE$ table.

Unfortunately this operation code is really complicated and the script that you will see is really just a simplification, made specifically for a 12c database and operations on the SOURCE$ table. (You will be able to check out more cool internal stuff when we finish VOODOO and if we decide to publish it as open source.)

Before we begin I want you to understand that this won’t be a ready-made tool for recovery in every scenario. The script will just show you the general idea and you’ll have to adjust it to your needs and circumstances.

Remember that you can also use the DBMS_LOGMNR package to do a similar job and use my scripts only to find the appropriate archivelogs to parse with LogMiner… but with that method you will need a powerful database account.

So let’s start the fun 🙂

First, the packages we will be using:

import sys,math,os,binascii,struct
from bitstring import ConstBitStream
  • sys – for gathering input parameters to our script
  • math – for math stuff like floor 😉
  • os – for looking through directories, subdirectories and files
  • binascii – for searching for a pattern inside a file
  • struct – for dealing with C structures
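To see what pattern we will actually be scanning for, here is a minimal sketch of the packing step (using an explicit little-endian format "<II" so the result is deterministic; the script itself relies on native byte order with "II", which is the same thing on x86):

```python
import struct
import binascii

# Pack OBJ# and DATA_OBJ# of SOURCE$ (both 301) as two 4-byte unsigned ints,
# the way they sit next to each other inside a "5.1" change vector.
pattern = struct.Struct("<II").pack(301, 301)
hex_pattern = "0x" + binascii.hexlify(pattern).decode()
print(hex_pattern)  # 301 = 0x12d -> 0x2d0100002d010000
```

This hex string is exactly what checkForObjd hands to bitstring’s findall.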

The first function is simple:

objids={}
objids["SOURCE$"]=[301,301]

def checkForObjd(filePath,objIds,startOffset):
    s_objIds = struct.Struct("II")
    f_testFile = ConstBitStream(filename=filePath)
    for tableName in objIds:
        b_objIds = s_objIds.pack(objIds[tableName][0],objIds[tableName][1])
        c_objIds = "0x" + binascii.hexlify(b_objIds)
        foundBlocks = f_testFile.findall(c_objIds,(startOffset+1)*512*8,bytealigned=True)
        yield foundBlocks

This function is responsible for finding all byte positions where object_id and data_object_id appear together in a file. OBJECT_ID and DATA_OBJECT_ID are stored as two unsigned integer structures, one after another, in a vector with operation code "5.1".

After finding a byte position of those structures, we will be able to use it as a starting point for looking for our data.
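The same search can be sketched with nothing but the standard library – a hypothetical illustration that writes the packed pattern into a temporary file and locates its byte offset with a plain byte search (bitstring’s findall returns *bit* positions instead, which is why the main loop later divides them by 8):

```python
import struct
import tempfile
import os

# Fake redo content: 100 filler bytes, then the packed OBJ#/DATA_OBJ# pair.
pattern = struct.pack("<II", 301, 301)
data = b"\x00" * 100 + pattern + b"\x00" * 50

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(data)
    path = f.name

with open(path, "rb") as f:
    byte_pos = f.read().find(pattern)  # byte offset of the pattern
os.remove(path)
print(byte_pos)  # 100
```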

def restoreCode(file_name,byte_pos):
        redolog = open(file_name,"r")

        curr_block = math.floor(byte_pos/512)
        back_offset = (byte_pos-30) % 512
        if curr_block != math.floor((byte_pos-30)/512) or (back_offset>=0 and back_offset<=16):
                file_header_mod = 16
        else:
                file_header_mod = 0
        redolog.seek(byte_pos-30-file_header_mod)

        column_sizes = redolog.read(6)
        obj_size = struct.Struct("HHH").unpack(column_sizes)[0]
        line_size = struct.Struct("HHH").unpack(column_sizes)[1]
        source_size = struct.Struct("HHH").unpack(column_sizes)[2]

This is the starting point of our function. After opening the file, we have to check the block number (a redo block is usually 512 bytes long).

LGWR writes data sequentially, which means that a redo record can be split across two different blocks. 30 bytes before OBJECT_ID we can find an array with the column lengths, but we have to check whether it lies in the same block, in the previous one, or in the same block but at the block header level – the header is always 16 bytes.
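That boundary check can be restated as a small helper (a sketch with hypothetical names; the script inlines this logic each time it needs it):

```python
import math

BLOCK_SIZE = 512  # a redo block is usually 512 bytes
HEADER = 16       # every redo block starts with a 16-byte header

def header_adjust(curr_block, target_pos):
    """Return 16 when target_pos lands in a different block than curr_block
    or inside a block header, 0 otherwise."""
    if curr_block != math.floor(target_pos / BLOCK_SIZE) or target_pos % BLOCK_SIZE <= HEADER:
        return HEADER
    return 0

# byte_pos 1000 is in block 1; byte_pos-30 = 970 is still in block 1:
print(header_adjust(1, 970))  # 0
# byte_pos 520 is in block 1; byte_pos-30 = 490 falls back into block 0:
print(header_adjust(1, 490))  # 16
```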

After checking this we can read column sizes for SOURCE$ table.

        skip_size = 0

        if obj_size % 4 == 0:
                skip_size = obj_size
        else:
                skip_size = obj_size + (4 - obj_size % 4)

        if line_size % 4 == 0:
                skip_size += line_size
        else:
                skip_size += line_size + (4 - line_size % 4)

We will need only the last column from the table, so we will skip the first two. But to calculate the skip_size, we have to check whether the values fill 4-byte structures. All data in redo is stored in 4-byte structures.
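In other words, each column length is padded up to the next multiple of 4. A one-line sketch of that rounding:

```python
def pad4(n):
    """Round a column length up to the next multiple of 4, as redo stores it."""
    return n if n % 4 == 0 else n + (4 - n % 4)

print(pad4(4), pad4(6), pad4(7))  # 4 8 8
```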

        redolog.seek(byte_pos+24)
        op_code_byte = redolog.read(1)
        op_code = struct.Struct("B").unpack(op_code_byte)[0]

        if op_code == 2:
                op_code_mod = 92
        else:
                op_code_mod = 108

24 bytes after OBJECT_ID we can find something called op_code – in an Oracle archivelog dump it looks like this:

KTB Redo
op: 0x02  ver: 0x01
compat bit: 4 (post-11) padding: 1
op: C  uba: 0x010015a6.02e7.02

Depending on the op_code value we will know how far into the block the data starts. We should probably also check whether 24 bytes forward is not in the next block or at the block header level and add 16B if needed, but I think that some structures have to be undivided… or at least I hope so 😉 But we didn’t make this kind of assumption in VOODOO 🙂

        back_offset = (byte_pos+op_code_mod+skip_size) % 512
        if curr_block != math.floor((byte_pos+op_code_mod+skip_size)/512) or (back_offset>=0 and back_offset<=16):
                file_header_mod = 16
        else:
                file_header_mod = 0

        data_pos = byte_pos+op_code_mod+skip_size+file_header_mod
        redolog.seek(data_pos)
        curr_block = math.floor(data_pos/512)
        if math.floor((redolog.tell()+source_size) / 512) == curr_block:
                line_byte = redolog.read(source_size)
                line_text = struct.Struct(str(source_size) + "s").unpack(line_byte)[0]
        else:
                diff = (curr_block+1)*512 - data_pos -1
                line_text1_byte = redolog.read(int(diff))
                redolog.seek((int(curr_block)+1)*512+16)
                rest_of_line = source_size-diff
                line_text2_byte = redolog.read(int(rest_of_line))
                line_text1 = struct.Struct(str(int(diff)) + "s").unpack(line_text1_byte)[0]
                line_text2 = struct.Struct(str(int(rest_of_line)) + "s").unpack(line_text2_byte)[0]
                line_text = line_text1 + line_text2
        print line_text,


        redolog.close()

After finding the data position we can read it and print the line of code 🙂 Of course here we have to check that we won’t cross into another block.
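A hypothetical, self-contained illustration of that split read: a 20-byte value starting at offset 500 spills over the block boundary at 512, and the remainder continues after the next block’s 16-byte header. This is done in memory instead of against a real archivelog and without the script’s exact offset arithmetic:

```python
BLOCK_SIZE = 512
HEADER = 16

payload = b"create or replace pr"          # 20 bytes of "source code"
data_pos = 500
first_len = BLOCK_SIZE - data_pos          # 12 bytes fit in the first block
rest_len = len(payload) - first_len        # 8 bytes land after the next header

# Build two blocks: value starts at 500, continues after the second header.
buf = bytearray(2 * BLOCK_SIZE)
buf[data_pos:BLOCK_SIZE] = payload[:first_len]
buf[BLOCK_SIZE + HEADER:BLOCK_SIZE + HEADER + rest_len] = payload[first_len:]

# Reassemble the value the way the else-branch of restoreCode does.
part1 = bytes(buf[data_pos:data_pos + first_len])
part2 = bytes(buf[BLOCK_SIZE + HEADER:BLOCK_SIZE + HEADER + rest_len])
print(part1 + part2)
```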

for path, subdirs, files in os.walk(sys.argv[1]):
        for filename in sorted(files):
                fname = os.path.join(path,filename)
                found = checkForObjd(fname,objids,0)
                for i in found:
                        found_elements = list(i)
                        if found_elements:
                                for j in found_elements:
                                        byte_number = j/8
                                        restoreCode(fname,byte_number)

The final part of the code just walks through the redo blocks in the archivelogs. So the whole script looks like this:

import sys,math,os,binascii,struct
from bitstring import ConstBitStream

objids={}
objids["SOURCE$"]=[301,301]

def checkForObjd(filePath,objIds,startOffset):
    s_objIds = struct.Struct("II")
    f_testFile = ConstBitStream(filename=filePath)
    for tableName in objIds:
        b_objIds = s_objIds.pack(objIds[tableName][0],objIds[tableName][1])
        c_objIds = "0x" + binascii.hexlify(b_objIds)
        foundBlocks = f_testFile.findall(c_objIds,(startOffset+1)*512*8,bytealigned=True)
        yield foundBlocks

def restoreCode(file_name,byte_pos):
	redolog = open(file_name,"r")

	curr_block = math.floor(byte_pos/512)
	back_offset = (byte_pos-30) % 512
	if curr_block != math.floor((byte_pos-30)/512) or (back_offset>=0 and back_offset<=16):
		file_header_mod = 16
	else:
		file_header_mod = 0
	redolog.seek(byte_pos-30-file_header_mod)

	column_sizes = redolog.read(6)
	obj_size = struct.Struct("HHH").unpack(column_sizes)[0]
	line_size = struct.Struct("HHH").unpack(column_sizes)[1]
	source_size = struct.Struct("HHH").unpack(column_sizes)[2]

	skip_size = 0

	if obj_size % 4 == 0:
		skip_size = obj_size
	else:
		skip_size = obj_size + (4 - obj_size % 4)

	if line_size % 4 == 0:
		skip_size += line_size
	else:
		skip_size += line_size + (4 - line_size % 4)

	redolog.seek(byte_pos+24)
	op_code_byte = redolog.read(1)
	op_code = struct.Struct("B").unpack(op_code_byte)[0]

	if op_code == 2:
		op_code_mod = 92
	else:
		op_code_mod = 108

	back_offset = (byte_pos+op_code_mod+skip_size) % 512
	if curr_block != math.floor((byte_pos+op_code_mod+skip_size)/512) or (back_offset>=0 and back_offset<=16):
		file_header_mod = 16
	else:
		file_header_mod = 0

	data_pos = byte_pos+op_code_mod+skip_size+file_header_mod
	redolog.seek(data_pos)
	curr_block = math.floor(data_pos/512)
	if math.floor((redolog.tell()+source_size) / 512) == curr_block:
		line_byte = redolog.read(source_size)
		line_text = struct.Struct(str(source_size) + "s").unpack(line_byte)[0]
	else:
		diff = (curr_block+1)*512 - data_pos -1
		line_text1_byte = redolog.read(int(diff))
		redolog.seek((int(curr_block)+1)*512+16)
		rest_of_line = source_size-diff
		line_text2_byte = redolog.read(int(rest_of_line))
		line_text1 = struct.Struct(str(int(diff)) + "s").unpack(line_text1_byte)[0]
		line_text2 = struct.Struct(str(int(rest_of_line)) + "s").unpack(line_text2_byte)[0]
		line_text = line_text1 + line_text2
	print line_text,


	redolog.close()

for path, subdirs, files in os.walk(sys.argv[1]):
	for filename in sorted(files):
		fname = os.path.join(path,filename)
		found = checkForObjd(fname,objids,0)
		for i in found:
			found_elements = list(i)
			if found_elements:
				for j in found_elements:
					byte_number = j/8
					restoreCode(fname,byte_number)

It’s not very elegant, but bear in mind that it’s only an illustration of a principle.

And how does it work? Let’s replace a procedure with some other useless code:

SQL> ed
Wrote file afiedt.buf

  1  create or replace procedure p_test_bulk(p_limit number) is
  2  begin
  3    dbms_output.put_line('dupa');
  4* end;
SQL> /

Procedure created.

We know where our archivelogs are stored, so we can use our simple script like this:

[oracle@rico recover_source]$ python recover_source.py /u01/app/oracle/fast_recovery_area/SKIPER/archivelog/2017_05_28/
 procedure p_test_bulk(p_limit number) is
  cursor c_emps is
    select *
    from employees;
  type t_emps is table of c_emps%ROWTYPE index by pls_integer;
  v_emps t_emps;
begin
  open c_emps;
  loop
     exit when cemps%NOTFOUND;

    for i in v_emps.first..v_emps.last loop
      dbms_output.put_line('TEST2');
    end loop;
  end loop;
  close c_emps;
zAd;

That’s it 🙂 For some reason the last "END;" is always lost but this is not a big problem.
I hope it will be helpful. Stay tuned for more REDO articles, based on our research for writing VOODOO!
