A generic method for acoustic processing using deep learning

Kisra, Saad; Khadhraoui, Bassem; Sable, Sebastien; Lam, Lu Duc Duong
ADIPEC 2019, Abu Dhabi International Petroleum Exhibition & Conference, 11-14 November, Abu Dhabi, UAE

A new method based on deep learning enables the extraction of formation compressional and shear slownesses from raw waveforms acquired by an acoustic tool, regardless of its conveyance system or hardware configuration (number of axial receivers, waveform sampling rate, or number of time samples). The proposed approach is very fast, fully automated, and suitable for real-time processing workflows at the wellsite.

Over the years, a large collection of acoustic waveforms has been recorded and processed by experts in a variety of environments. In the proposed method, we apply a convolutional neural network (also known as ConvNet or CNN) that learns from this previously processed data to estimate acoustic slownesses from raw waveforms. Because we use an algorithm originally designed for visual recognition, we transform the raw waveforms into images whose enhanced characteristics are directly associated with the acoustic slownesses we aim to predict. For monopole waveforms, we improved the prediction results with a short-term average/long-term average (STA/LTA) technique that enhances the main arrivals. We then train a CNN model on both the input images and the expected outputs (i.e., slowness values) over a large variety of data covering the main rock environments of interest. The trained CNN model is subsequently used to estimate slowness values from unprocessed waveforms it has never seen.
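To illustrate the waveform-to-image transform described above, the following is a minimal sketch of an STA/LTA ratio applied per receiver trace. The window lengths, the energy-based averaging, and the function names are assumptions for illustration, not the authors' exact implementation:

```python
import numpy as np

def sta_lta(trace, sta_len=20, lta_len=200, eps=1e-12):
    """Causal short-term-average / long-term-average ratio of signal energy.

    The ratio spikes where a strong arrival begins, enhancing the main
    arrivals relative to the preceding noise.
    NOTE: window lengths are illustrative placeholders, not tool defaults.
    """
    energy = np.asarray(trace, dtype=float) ** 2
    ratio = np.empty_like(energy)
    for i in range(len(energy)):
        sta = energy[max(0, i - sta_len + 1): i + 1].mean()
        lta = energy[max(0, i - lta_len + 1): i + 1].mean()
        ratio[i] = sta / (lta + eps)
    return ratio

def waveforms_to_image(waveforms):
    """Stack the STA/LTA traces of all axial receivers into a 2-D
    (receivers x time-samples) array, usable as a one-channel image."""
    return np.stack([sta_lta(w) for w in waveforms])
```

Because the STA/LTA ratio is dimensionless, an image built this way is insensitive to absolute amplitudes, which is consistent with a model that does not depend on the recording hardware.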

To test our method on real data, we gathered a collection of acoustic waveforms recorded by several acoustic tools in 20 wells drilled in different fields across the world. The wells were drilled with different bit sizes (from 6 to 17.5 in.), with compressional slownesses ranging from 50 to 165 μs/ft and shear slownesses from 80 to 600 μs/ft. In total, we used 96,011 data points, where each data point consists of a waveform array paired with the slowness value calculated by an acoustics expert. We then applied the trained CNN model to a set of waveforms from validation wells, i.e., wells that were not part of the training dataset.

In these wells, previously unseen by the ConvNet model, the average absolute error between the slowness estimated by the trained CNN model and the slowness calculated by an expert was less than 3 μs/ft, which is comparable to the error of state-of-the-art processing techniques for slowness estimation. We also discuss how our method can be extended to estimate shear slowness from dipole data using the same ConvNet. Once training is complete, the deep-learning-based technique for slowness estimation runs extremely fast and provides good slowness results without prior information about the tool configuration or the environment in which the waveforms were recorded. Because our technique is fully automated, it can also serve as an automatic quality control (QC) flag for wellsite processing and real-time operations.


Type: Conference
City: Abu Dhabi
Date: 2019-11-11
Department: Digital Security
Eurecom Ref: 6115

PERMALINK : https://www.eurecom.fr/publication/6115