Towards automatic and accurate core-log processing

Journal of Applied Geophysics (2023)

Abstract
The analysis of rocks plays an important role in geological and petroleum-engineering problems. In these tasks, the core (i.e., core-log) is a crucial element, since it provides underlying information on the geophysical properties of the area. Accordingly, studies often leverage core data to assign the correct category to well logs. Unfortunately, processing core is laborious and time-consuming; hence, its analysis takes long periods. For example, for a single well, a human expert may have to adjust thousands of core data points. Moreover, by its nature, log analysis must handle noise and missing data. Given these issues, our goal is to propose an automatic (i.e., without any human intervention) and accurate core-processing method using the gamma-ray log. To achieve this goal, and additionally to identify the most promising pattern recognition strategies, we assess the effectiveness of several models in diagnosing the core-log. Such models include simple regressions, ensembles, gradient boosting, recurrence models, and the relatively recent Transformer network. Our comprehensive evaluation differs from existing works on geoscience tasks, which evaluate a small number of models or variations of a single model. In particular, we evaluate more than 200 unique models (i.e., models with different hyperparameters) and observe that deep learning techniques outperform the other techniques by a large margin. Furthermore, we compare the trade-off between the predictive ability and the computational cost of several models, reliable information for real-time lithology analysis that previous works often overlook. According to our results, the manual diagnosis of core (i.e., by a human expert, a geologist) can be effectively replaced by pattern recognition methods, mainly the Transformer model, as it aligns the cores in accordance (i.e., in the same direction) with the geologist. More specifically, given the core and gamma-ray as input, the Transformer model outputs an adjusted core with an R2 of 93.63, which indicates a fine-grained adjustment. We empirically demonstrate that the success of the Transformer lies in its effectiveness in representing long sequences of core and well-log data. In contrast, other models such as the RNN, LSTM, and GRU collapse when processing long core-log sequences. We further confirm that Transformers are among the top-performing models and often surpass recurrence models on several geoscience applications, such as lithology and log-shape classification and the prediction of oil production. Regarding the first task, the models successfully classify different facies categories such as coarse sandstone, medium sandstone, fine sandstone, siltstone, dolomite, limestone, and mudstone. In these tasks, the Transformer outperforms the widely employed LSTM by up to 14 percentage points. Our empirical observations encourage the exploration of multiple models and suggest Transformers as strong baselines for future research on geoscience tasks. In this direction, we release all data and trained models used throughout the work. To the best of our knowledge, this is the first study exploring Transformer models on geoscience applications.
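
To make the described setup concrete, the following is a minimal sketch, not the authors' released code or exact architecture: it assumes a plain PyTorch Transformer encoder that maps windows of (core, gamma-ray) pairs to an adjusted core sequence and is scored with R2. The class name CoreLogTransformer, the window length, the feature layout, and the synthetic training loop are all hypothetical placeholders.

    # Sketch only: a Transformer encoder for core-log adjustment as sequence regression.
    # All names, shapes, and hyperparameters below are illustrative assumptions.
    import torch
    import torch.nn as nn

    class CoreLogTransformer(nn.Module):
        def __init__(self, d_model=64, nhead=4, num_layers=2):
            super().__init__()
            self.input_proj = nn.Linear(2, d_model)   # 2 input features: core, gamma-ray
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
            self.output_proj = nn.Linear(d_model, 1)  # adjusted core value per depth step

        def forward(self, x):                          # x: (batch, seq_len, 2)
            h = self.encoder(self.input_proj(x))
            return self.output_proj(h).squeeze(-1)     # (batch, seq_len)

    def r2_score(y_true, y_pred):
        ss_res = torch.sum((y_true - y_pred) ** 2)
        ss_tot = torch.sum((y_true - y_true.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    # Toy usage with synthetic data; real core-log windows would replace this.
    model = CoreLogTransformer()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.randn(8, 256, 2)                        # 8 windows, 256 depth steps, (core, gamma-ray)
    y = x[..., 0] + 0.1 * torch.randn(8, 256)         # hypothetical "adjusted core" target

    for _ in range(5):                                 # a few optimisation steps for illustration
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        optimizer.step()

    print("R2 on toy data:", r2_score(y, model(x)).item())

In this sequence-regression framing, the whole depth window is encoded at once, which is one plausible reason a Transformer would cope better with long core-log sequences than step-by-step recurrent models.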
Keywords
Core-log processing, Machine learning, Recurrence models, Transformer network