Shit happens.
We recently had yet another customer who can relate to that absolute truth – after a serious storage problem, they managed to restore data files from only one mount point. In theory they were lucky, because it was the mount point with the actual data tablespaces… but they didn’t have any SYSTEM datafiles.
It’s like having all the pages of every book in a library, but without any page numbers or covers – and the books are written in a language you don’t understand. You have the data, but the data is useless.
What can you do in such a situation? Well… not much. But at least you can try to extract the data into CSV files and match it back to tables.
To do this, you could use the rico2 code to build a tool for extracting data from database blocks.
I have created an example of such a tool in Rust – rico3. In this tool, oracle_decooder.rs is responsible for guessing simple data types based on their raw hex values.
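To give you an idea of what such guessing can look like, here is a minimal Rust sketch (hypothetical helper functions, not the actual rico3 code) that classifies a raw column value as DATE, NUMBER or VARCHAR2, based on the documented on-disk encodings of those types – a DATE is 7 bytes (century+100, year+100, month, day, hour+1, minute+1, second+1) and a positive NUMBER is an exponent byte followed by base-100 digits stored as digit+1:

// Hypothetical sketch of the type guessing idea, not the rico3 source.

fn looks_like_date(b: &[u8]) -> bool {
    // An Oracle DATE is exactly 7 bytes:
    // century+100, year+100, month, day, hour+1, minute+1, second+1
    b.len() == 7
        && b[0] >= 100 && b[1] >= 100
        && (1..=12).contains(&b[2])
        && (1..=31).contains(&b[3])
        && (1..=24).contains(&b[4])
        && (1..=60).contains(&b[5])
        && (1..=60).contains(&b[6])
}

fn decode_positive_number(b: &[u8]) -> Option<f64> {
    // Positive Oracle NUMBER: exponent byte (0xC1 is 100^0), then
    // base-100 mantissa digits stored as digit+1. The sketch skips
    // zero, negatives and sub-1 values to stay simple.
    let (&exp, mantissa) = b.split_first()?;
    if exp < 0xC1 {
        return None;
    }
    let mut value = 0f64;
    for &d in mantissa {
        if !(1..=100).contains(&d) {
            return None;
        }
        value = value * 100.0 + f64::from(d - 1);
    }
    let scale = i32::from(exp) - 0xC1 - (mantissa.len() as i32 - 1);
    Some(value * 100f64.powi(scale))
}

fn guess_column(b: &[u8]) -> String {
    if looks_like_date(b) {
        "DATE".into()
    } else if decode_positive_number(b).is_some() {
        "NUMBER".into()
    } else if b.iter().all(|c| c.is_ascii_graphic() || *c == b' ') {
        "VARCHAR2".into()
    } else {
        "RAW".into()
    }
}

fn main() {
    // C2 02 0B decodes as NUMBER 110; the 7 bytes below are 2002-06-07.
    println!("{}", guess_column(&[0xC2, 0x02, 0x0B]));        // NUMBER
    println!("{}", guess_column(&[120, 102, 6, 7, 1, 1, 1])); // DATE
    println!("{}", guess_column(b"SHIGGINS"));                // VARCHAR2
}

This kind of guessing is heuristic by nature – a short string of printable bytes can accidentally look like a valid NUMBER – which is exactly why the resulting CSV files still need a human to match them to tables.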
Let’s see how it works – first we have to create a JSON file with parameters:
{
  "action": "consolidate objects",
  "workdir": "/tmp/rico3",
  "data_files": ["users.dbf"]
}
Now we can run the first step:
inter@piclerick rico3 % ./target/release/rico3 -p params.json
Processing file users.dbf
Starting worker 0
Starting worker 1
Stopping worker 0
Stopping worker 1
inter@piclerick rico3 % ls /tmp/rico3
73297.dat 73298.dat 73299.dat 73300.dat 73301.dat 73302.dat 73454.dat 73456.dat 73460.dat
The result of this first operation is a set of files named after the data object ids of the extracted objects – each file contains just the raw table blocks of one object.
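The consolidation idea itself is conceptually simple. Here is a minimal sketch (again, not the actual rico3 code) under assumptions I’m stating explicitly: 8 KB blocks, a little-endian platform, and the data object id sitting at offset 24 of the block – right after the 20-byte cache header (kcbh) and the 4-byte ktbbh type field – which is where you’ll find it in common Oracle block layouts:

// Hypothetical sketch: scan a datafile block by block, keep only
// data blocks (type 0x06) and append each one to <objd>.dat.

use std::collections::HashMap;
use std::fs::{File, OpenOptions};
use std::io::{Read, Write};

const BLOCK_SIZE: usize = 8192; // assumption: 8 KB blocks

fn main() -> std::io::Result<()> {
    let mut input = File::open("users.dbf")?;
    let mut block = vec![0u8; BLOCK_SIZE];
    let mut outputs: HashMap<u32, File> = HashMap::new();

    while input.read_exact(&mut block).is_ok() {
        // Byte 0 of the cache header (kcbh) is the block type;
        // 0x06 marks a transaction-managed data block.
        if block[0] != 0x06 {
            continue;
        }
        // Assumption: data object id at offset 24, little-endian.
        let objd = u32::from_le_bytes(block[24..28].try_into().unwrap());
        let out = outputs.entry(objd).or_insert_with(|| {
            OpenOptions::new()
                .create(true)
                .append(true)
                .open(format!("/tmp/rico3/{objd}.dat"))
                .expect("cannot open output file")
        });
        out.write_all(&block)?;
    }
    Ok(())
}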
Let’s prepare another JSON parameter file to extract the raw data from those files:
{
  "action": "extract data from file",
  "workdir": "/tmp/rico3",
  "data_files": ["73297.dat","73298.dat","73299.dat","73300.dat","73301.dat","73302.dat","73454.dat","73456.dat","73460.dat"]
}
Now we can run the extract:
inter@piclerick rico3 % ./target/release/rico3 -p params_extract.json
Processing file 73297.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73298.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73299.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73300.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73301.dat
Starting worker 0
Starting worker 1
Stopping worker 0
Stopping worker 1
Processing file 73302.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73454.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73456.dat
Starting worker 0
Starting worker 1
Stopping worker 1
Stopping worker 0
Processing file 73460.dat
Starting worker 0
Starting worker 1
Stopping worker 0
Stopping worker 1
After this step you should have a set of CSV files with extracted data.
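Under the hood, each CSV line comes from one row piece inside a block (located via the block’s row directory, which the sketch below takes as a given). Here is a rough sketch of that decoding step – a hypothetical helper, not the rico3 source – assuming the classic row piece layout: flag byte, lock byte, column count, then one length byte per column, where 0xFF marks NULL and 0xFE announces a 2-byte big-endian length:

// Hypothetical sketch: decode one row piece starting at `offset`
// into a single pipe-separated CSV line.

fn row_piece_to_csv(block: &[u8], offset: usize) -> Option<String> {
    let mut pos = offset;
    let _flags = *block.get(pos)?;    // fb: head/first/last piece bits
    let _lock = *block.get(pos + 1)?; // lb: ITL slot holding the row lock
    let cols = *block.get(pos + 2)? as usize;
    pos += 3;

    let mut fields = Vec::with_capacity(cols);
    for _ in 0..cols {
        let len_byte = *block.get(pos)?;
        pos += 1;
        let len = match len_byte {
            0xFF => {
                fields.push("NULL".to_string());
                continue;
            }
            0xFE => {
                // Assumption: long column lengths are 2 bytes, big-endian.
                let l = u16::from_be_bytes([*block.get(pos)?, *block.get(pos + 1)?]);
                pos += 2;
                l as usize
            }
            l => l as usize,
        };
        let raw = block.get(pos..pos + len)?;
        pos += len;
        // In the real tool this is where the type guessing from
        // oracle_decooder.rs would kick in; the sketch just prints hex.
        fields.push(raw.iter().map(|b| format!("{b:02x}")).collect());
    }
    Some(format!("|{}", fields.join("|")))
}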
inter@piclerick rico3 % tail 73298.csv
|197.0000|Kevin|Feeney|KFEENEY|650.507.9822|2006-05-23 00:00:00|SH_CLERK|6000.00|NULL|124.0000|50.00
|198.0000|Donald|OConnell|DOCONNEL|650.507.9833|2007-06-21 00:00:00|SH_CLERK|5200.00|NULL|124.0000|50.00
|199.0000|Douglas|Grant|DGRANT|650.507.9844|2008-01-13 00:00:00|SH_CLERK|5200.00|NULL|124.0000|50.00
|200.00|Jennifer|Whalen|JWHALEN|515.123.4444|2003-09-17 00:00:00|AD_ASST|8800.00|NULL|101.0000|10.00
|201.0000|Michael|Hartstein|MHARTSTE|515.123.5555|2004-02-17 00:00:00|MK_MAN|26000.0000|NULL|100.00|20.00
|202.0000|Pat|Fay|PFAY|603.123.6666|2005-08-17 00:00:00|MK_REP|12000.0000|NULL|201.0000|20.00
|203.0000|Susan|Mavris|SMAVRIS|515.123.7777|2002-06-07 00:00:00|HR_REP|13000.0000|NULL|101.0000|40.00
|204.0000|Hermann|Baer|HBAER|515.123.8888|2002-06-07 00:00:00|PR_REP|20000.00|NULL|101.0000|70.00
|205.0000|Shelley|Higgins|SHIGGINS|515.123.8080|2002-06-07 00:00:00|AC_MGR|24016.000000|NULL|101.0000|110.0000
|206.0000|William|Gietz|WGIETZ|515.123.8181|2002-06-07 00:00:00|AC_ACCOUNT|16600.0000|NULL|205.0000|110.0000
The next step would be to match the CSV files with database objects and try to load the data into a fresh database.
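Matching can be as simple as comparing column-type signatures. A toy Rust sketch of that idea – the expected table layout below is modeled on HR.EMPLOYEES and is an assumption on my part; in practice you would take the candidate layouts from application DDL or documentation:

// Hypothetical sketch: match a CSV to a table by comparing the
// guessed per-column types against expected table signatures.

fn signature_matches(csv_types: &[&str], table_types: &[&str]) -> bool {
    csv_types.len() == table_types.len()
        && csv_types.iter().zip(table_types).all(|(a, b)| a == b)
}

fn main() {
    // Types guessed from 73298.csv during extraction:
    let csv = ["NUMBER", "VARCHAR2", "VARCHAR2", "VARCHAR2", "VARCHAR2",
               "DATE", "VARCHAR2", "NUMBER", "NUMBER", "NUMBER", "NUMBER"];
    // Candidate layout (assumed, modeled on HR.EMPLOYEES):
    let employees = ["NUMBER", "VARCHAR2", "VARCHAR2", "VARCHAR2", "VARCHAR2",
                     "DATE", "VARCHAR2", "NUMBER", "NUMBER", "NUMBER", "NUMBER"];
    println!("EMPLOYEES match: {}", signature_matches(&csv, &employees));
}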
Of course this is just a sample tool, and it would require a lot more work to support more Oracle data types and situations like chained and migrated rows – but keep in mind that the situation is usually not as hopeless as you would expect! At least once a year we restore a database without a proper backup for a customer in a "hopeless" situation 🙂