Last resort data recovery


04.09.2023
by Kamil Stawiarski

Shit happens.

Recently we had another customer who can relate to that absolute truth – after a serious storage failure, they managed to restore data files from only one mount point. In theory they were lucky, because it was the mount point with the actual data tablespaces… but they didn’t have any SYSTEM datafiles.

It’s like having all the pages of every book in a library, but without page numbers or covers – and the books are written in a language you don’t understand. You have the data, but the data is useless.

What can you do in such a situation? Well… not much. But at least you can try to extract the data into CSV files and match it to tables.

To do this, you could use the rico2 code as a base for a tool that extracts data from database blocks.

I have created an example of such a tool in Rust – rico3. In it, oracle_decooder.rs is responsible for guessing simple data types based on their raw hex values.
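The core idea behind such guessing is that Oracle stores NUMBER values as a base-100 mantissa with a sign-and-exponent byte, and DATE values as seven fixed-offset bytes (century+100, year+100, month, day, hour+1, minute+1, second+1). The sketch below is my simplified illustration of that idea, not the actual rico3 code – it handles only positive numbers and ignores the many edge cases a real decoder has to deal with:

```rust
/// Decode a positive Oracle NUMBER: the first byte holds sign + exponent,
/// the remaining bytes are base-100 digits, each stored as digit + 1.
/// (Sketch only: negative numbers use a complemented form not handled here.)
fn decode_number(raw: &[u8]) -> Option<f64> {
    let (&exp_byte, mantissa) = raw.split_first()?;
    if exp_byte & 0x80 == 0 {
        return None; // negative number - out of scope for this sketch
    }
    let exp = (exp_byte & 0x7F) as i32 - 65; // exponent in base-100 terms
    let mut value = 0.0;
    for (i, &d) in mantissa.iter().enumerate() {
        value += (d as f64 - 1.0) * 100f64.powi(exp - i as i32);
    }
    Some(value)
}

/// Decode the 7-byte Oracle DATE: century+100, year+100, month, day,
/// hour+1, minute+1, second+1.
fn decode_date(raw: &[u8]) -> Option<String> {
    if raw.len() != 7 {
        return None;
    }
    let year = (raw[0] as i32 - 100) * 100 + (raw[1] as i32 - 100);
    Some(format!(
        "{:04}-{:02}-{:02} {:02}:{:02}:{:02}",
        year, raw[2], raw[3], raw[4] - 1, raw[5] - 1, raw[6] - 1
    ))
}

fn main() {
    // 0xC2 0x02 0x62 -> 1*100 + 97 = 197
    println!("{:?}", decode_number(&[0xC2, 0x02, 0x62]));
    // 120,103,9,17,1,1,1 -> 2003-09-17 00:00:00
    println!("{:?}", decode_date(&[120, 103, 9, 17, 1, 1, 1]));
}
```

Deciding *which* decoder to apply to a raw column value is the hard part – a decoder that produces a plausible value (a sane year, a number with few significant digits) is taken as the guess.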

Let’s see how it works – first we have to create a JSON file with parameters:

{
    "action":     "consolidate objects",
    "workdir":    "/tmp/rico3",
    "data_files": ["users.dbf"]
}

Now we can run the first step:

inter@piclerick rico3 % ./target/release/rico3 -p params.json
Processing file users.dbf
Starting worker 0
Starting worker 1
Stopping worker 0
Stopping worker 1
inter@piclerick rico3 % ls /tmp/rico3
73297.dat  73298.dat  73299.dat  73300.dat  73301.dat  73302.dat  73454.dat  73456.dat  73460.dat

The result of this first step is a set of files named after the data object IDs of the extracted objects – each file contains just the raw table blocks belonging to that object.

Let’s prepare another JSON parameter file to extract raw data from those files:

{
    "action":     "extract data from file",
    "workdir":    "/tmp/rico3",
    "data_files": ["73297.dat","73298.dat","73299.dat","73300.dat","73301.dat","73302.dat","73454.dat","73456.dat","73460.dat"]
}

Now we can run the extract:

inter@piclerick rico3 % ./target/release/rico3 -p params_extract.json
Processing file 73297.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73298.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73299.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73300.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73301.dat
Starting worker 0
Starting worker 1
Stopping worker 0
Stopping worker 1
Processing file 73302.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73454.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73456.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73460.dat
Starting worker 0
Starting worker 1
Stopping worker 0
Stopping worker 1

After this step you should have a set of CSV files with the extracted data.

inter@piclerick rico3 % tail 73298.csv
|197.0000|Kevin|Feeney|KFEENEY|650.507.9822|2006-05-23 00:00:00|SH_CLERK|6000.00|NULL|124.0000|50.00
|198.0000|Donald|OConnell|DOCONNEL|650.507.9833|2007-06-21 00:00:00|SH_CLERK|5200.00|NULL|124.0000|50.00
|199.0000|Douglas|Grant|DGRANT|650.507.9844|2008-01-13 00:00:00|SH_CLERK|5200.00|NULL|124.0000|50.00
|200.00|Jennifer|Whalen|JWHALEN|515.123.4444|2003-09-17 00:00:00|AD_ASST|8800.00|NULL|101.0000|10.00
|201.0000|Michael|Hartstein|MHARTSTE|515.123.5555|2004-02-17 00:00:00|MK_MAN|26000.0000|NULL|100.00|20.00
|202.0000|Pat|Fay|PFAY|603.123.6666|2005-08-17 00:00:00|MK_REP|12000.0000|NULL|201.0000|20.00
|203.0000|Susan|Mavris|SMAVRIS|515.123.7777|2002-06-07 00:00:00|HR_REP|13000.0000|NULL|101.0000|40.00
|204.0000|Hermann|Baer|HBAER|515.123.8888|2002-06-07 00:00:00|PR_REP|20000.00|NULL|101.0000|70.00
|205.0000|Shelley|Higgins|SHIGGINS|515.123.8080|2002-06-07 00:00:00|AC_MGR|24016.000000|NULL|101.0000|110.0000
|206.0000|William|Gietz|WGIETZ|515.123.8181|2002-06-07 00:00:00|AC_ACCOUNT|16600.0000|NULL|205.0000|110.0000

The next step would be to match the CSV files with database objects and load the data into a fresh database.
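Matching can start with something as crude as comparing column counts – the sample rows above have 11 fields, which happens to match HR.EMPLOYEES. A hypothetical helper along those lines (the table names and column counts below are illustrative, e.g. taken from the application's DDL scripts):

```rust
/// Return the names of known tables whose column count matches a CSV row.
/// The extracted rows start with the separator, so the first field is empty.
fn candidate_tables<'a>(csv_line: &str, tables: &[(&'a str, usize)]) -> Vec<&'a str> {
    let cols = csv_line.split('|').skip(1).count();
    tables
        .iter()
        .filter(|&&(_, n)| n == cols)
        .map(|&(name, _)| name)
        .collect()
}

fn main() {
    // hypothetical list of table names with their column counts
    let known = [("EMPLOYEES", 11), ("DEPARTMENTS", 4), ("JOBS", 4)];
    let row = "|206.0000|William|Gietz|WGIETZ|515.123.8181|2002-06-07 00:00:00|AC_ACCOUNT|16600.0000|NULL|205.0000|110.0000";
    println!("{:?}", candidate_tables(row, &known));
}
```

With the candidates narrowed down, checking guessed column types and value ranges against the expected schema usually settles the rest, and the CSV files can then be loaded with any bulk-loading tool.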

Of course, this is just a sample tool – it would need a lot more work to support more Oracle data types and situations like row chaining and migration. But keep in mind that the situation is usually not as hopeless as you would expect! At least once a year we restore a database without a proper backup for a customer in a "hopeless" situation 🙂

