Single Source for Borehole Data
Vast and expanding quantities of borehole data are acquired over time in disparate formats, then distributed, loaded, and duplicated repeatedly across different databases. Information may be incomplete, inaccurate, or uncertain, and access to critical project data is often too slow for today’s fast-paced decisions. Recall™ Borehole data management software is the industry’s leading solution for efficiently storing, managing, and publishing raw and edited borehole data in one integrated system.
Reduce cycle time and data redundancy
Recall software consists of two databases, one for edited data and one for raw data in original format, that function as a single, integrated system. Data is loaded and quality controlled once. Standardized naming conventions and common queries apply across both databases, reducing cycle time and the redundancy of maintaining multiple copies.
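As a minimal sketch of this load-once pattern, the snippet below registers the original data in an archive store and publishes an edited copy to a live store under one standardized curve name. The class, store names, and mnemonic mapping are illustrative assumptions, not the Recall API.

```python
from dataclasses import dataclass, field

# Standardized naming: map vendor mnemonics to one canonical curve name
# (illustrative mapping, not an actual Recall convention).
CANONICAL_NAMES = {"GR": "GAMMA_RAY", "GRC": "GAMMA_RAY", "DT": "SONIC_DT"}

@dataclass
class BoreholeStore:
    """Stand-in for a borehole database: (well, curve) -> samples."""
    curves: dict = field(default_factory=dict)

    def put(self, well: str, curve: str, samples: list) -> None:
        self.curves[(well, curve)] = samples

raw_archive = BoreholeStore()  # raw data in original form
live_db = BoreholeStore()      # edited, standard data

def load_once(well: str, mnemonic: str, samples: list) -> str:
    """One load step feeds both stores under the same canonical name."""
    name = CANONICAL_NAMES.get(mnemonic, mnemonic)
    raw_archive.put(well, name, samples)  # original preserved as acquired
    live_db.put(well, name, [s for s in samples if s is not None])  # edited
    return name

name = load_once("WELL-01", "GRC", [45.2, None, 47.8])
print(name, live_db.curves[("WELL-01", name)])  # GAMMA_RAY [45.2, 47.8]
```

Because both stores key data by the same canonical name, one query convention works against either database, which is the mechanism behind the reduced cycle time described above.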
Rapidly evaluate data integrity and fix problems
Recall Borehole software can automatically load data from tapes and image files, collect metadata, and perform data verification. Specialized rules-based tools can verify data completeness, search for anomalies, and identify issues with metadata, data ranges, or values. Data managers can quickly appraise data integrity and fix problems early. Results from the data QC process are stored with the data, giving end users and data managers confidence in the quality of the data they use.
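The rules-based approach can be sketched as a list of independent checks whose findings are stored alongside the curve itself. This is an illustrative stand-in, not the Recall rule engine; the rule names and curve dictionary are assumed for the example.

```python
def check_completeness(curve):
    """Flag missing samples (None values) in the curve."""
    missing = sum(1 for v in curve["samples"] if v is None)
    return [f"{missing} missing samples"] if missing else []

def check_range(curve):
    """Flag values outside the curve's declared valid range."""
    lo, hi = curve["valid_range"]
    bad = [v for v in curve["samples"] if v is not None and not lo <= v <= hi]
    return [f"{len(bad)} out-of-range values"] if bad else []

RULES = [check_completeness, check_range]

def run_qc(curve):
    """Run every rule and store the findings with the data itself."""
    curve["qc_results"] = [msg for rule in RULES for msg in rule(curve)]
    return curve

gamma = {"name": "GR", "valid_range": (0.0, 300.0),
         "samples": [45.2, None, 350.0, 88.1]}
run_qc(gamma)
print(gamma["qc_results"])  # ['1 missing samples', '1 out-of-range values']
```

Keeping the QC results attached to the curve, rather than in a separate report, is what lets a later user see at a glance what was checked and what was found.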
Improve access to high-value borehole data
The Recall Live database stores only high-value, "standard" data that asset teams use daily for interpretation and reservoir characterization. Because it is optimized for access speed and bulk-storage efficiency, generalists can easily browse, locate, and export any type of edited borehole data sampled in depth or time, such as logs, curves, and images.
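A browse-and-export workflow against such a store might look like the following. The dictionary layout, function names, and CSV format are assumptions for illustration only, not the Recall Live interface.

```python
# Stand-in live store: (well, curve) -> depth-sampled record.
live = {
    ("WELL-01", "GAMMA_RAY"): {"index": "depth", "units": "m",
                               "samples": [(1500.0, 45.2), (1500.5, 47.8)]},
    ("WELL-01", "RES_DEEP"):  {"index": "depth", "units": "m",
                               "samples": [(1500.0, 12.1), (1500.5, 11.8)]},
}

def browse(well):
    """List the edited curves stored for a well."""
    return sorted(curve for (w, curve) in live if w == well)

def export_csv(well, curve):
    """Export one depth-sampled curve as index,value CSV text."""
    rec = live[(well, curve)]
    header = f"{rec['index']}_{rec['units']},{curve}"
    rows = [f"{d},{v}" for d, v in rec["samples"]]
    return "\n".join([header] + rows)

print(browse("WELL-01"))                      # ['GAMMA_RAY', 'RES_DEEP']
print(export_csv("WELL-01", "GAMMA_RAY"))
```

The same pattern applies unchanged to time-sampled data; only the index column differs.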
Safely archive raw borehole data in original format
The Recall Original Format Digital Well Archive provides safe, long-term storage of raw data, both structured and unstructured, in its original acquired format. Validated upon registration and archived online, nearline, or offline, original data can be automatically extracted at any level of granularity for additional processing, validation of interpretations, and risk reduction.
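The validate-on-registration and granular-extraction ideas can be sketched with a simple checksum scheme. The file name, storage-tier labels, and checksum approach are assumptions for the example; they do not describe Recall's internal implementation.

```python
import hashlib

# Stand-in archive: (well, file_name) -> record with payload and checksum.
archive = {}

def register(well, file_name, payload, tier="online"):
    """Archive an original-format file, recording a checksum at registration."""
    archive[(well, file_name)] = {
        "payload": payload,
        "sha256": hashlib.sha256(payload).hexdigest(),
        "tier": tier,  # e.g. online / nearline / offline
    }

def extract(well, file_name, start=0, end=None):
    """Return any byte range of the original file, re-validating integrity first."""
    rec = archive[(well, file_name)]
    if hashlib.sha256(rec["payload"]).hexdigest() != rec["sha256"]:
        raise ValueError("archived data failed integrity check")
    return rec["payload"][start:end]

register("WELL-01", "run1.dlis", b"raw log frames...")
print(extract("WELL-01", "run1.dlis", 0, 3))  # b'raw'
```

Extracting an arbitrary byte range stands in for "any level of granularity": the same original file can serve a full reprocessing job or a single-frame spot check.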