
Saturday 12 March 2011

Seismic Data Interpretation



Introduction
A solid background is required to understand the problems in seismic data acquisition and processing. Efforts are continually being made to image the subsurface more accurately, but unless subsurface (well) data are tied to the surface seismic data, no one can be certain about the presence of hydrocarbons. The more subsurface (well) data we gather, the more accurate our interpretation results become.

No survey is done blindly; there should be enough support from previous data (interpreted gravity and magnetic data, or older 2D seismic) to locate the region where new seismic data should be acquired for detailed investigation.

Land seismic acquisition is more time consuming than marine acquisition, and the volume of data acquired in a marine survey is huge compared with a land survey. Marine data is also easier to acquire and to process: no elevation (weathering/statics) corrections have to be applied as they do for land data, and the noise level is much lower.

Passive Seismic: Low Frequency
Low Frequency (LF) seismic anomalies associated with hydrocarbon reservoirs have been observed by various industry and academic groups. 
Data is acquired using well-known, standard broadband seismometers, which nominally record for 24 hours to capture a full diurnal cycle. Noise is then separated from potential hydrocarbon-reservoir-related signals using methods in the time, frequency, and spatial domains. After analyzing the data for possible near-surface effects, statistical attributes are calculated that can be related to hydrocarbon potential and ultimately to reservoir parameters. These LF attributes are displayed either as profiles or as maps, depending on the acquisition geometry.
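As a rough illustration of how such a low-frequency attribute might be derived, here is a minimal Python sketch that computes the ratio of spectral power in a low band to a reference band from a single passive record. The band limits, window length, and the attribute definition itself are illustrative assumptions, not any vendor's proprietary method.

import numpy as np
from scipy.signal import welch

def lf_spectral_attribute(trace, fs, band=(1.0, 6.0), ref_band=(6.0, 12.0)):
    # Ratio of mean power in a low-frequency band to a reference band.
    # trace: 1-D passive record (e.g. a long broadband recording); fs in Hz.
    f, pxx = welch(trace, fs=fs, nperseg=4096)
    low = pxx[(f >= band[0]) & (f < band[1])].mean()
    ref = pxx[(f >= ref_band[0]) & (f < ref_band[1])].mean()
    return low / ref

# Synthetic example: one hour of random noise sampled at 100 Hz
rng = np.random.default_rng(0)
print(lf_spectral_attribute(rng.standard_normal(100 * 3600), fs=100.0))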

Geochemical Sampling Technology: Before drilling a well, it is always wise to carry out a geochemical analysis and to combine the geochemical interpretation with the seismic interpretation to arrive at more accurate well locations. It can save billions.
A patented passive diffusion module is used for geochemical sampling. Inside each module is an adsorbent structure engineered to be hydrophobic and to collect a wide variety of volatile inorganic and organic compounds, ranging from C2 (ethane) to C20 (phytane). Each module and its jar are labeled with a unique serial number, and an adhesive seal attached to the jar and lid indicates whether it has been opened.
These adsorbents are protected by a sealed, durable microporous tube of polytetrafluoroethylene. This inert membrane has pores 1000 times larger than the molecules to be collected and an open area of over 80%, making it essentially transparent to the gas molecules. Yet the hydrophobic pores are small enough to reject soil and water to a depth of over 30 feet, keeping the adsorbents clean and eliminating water contact. As a result, these sorbers can be used in air, soil, saturated sediments, or directly in water. When placed in water, compounds partition across the membrane following Henry's law and are adsorbed almost instantaneously.
The engineered adsorbents inside the module are designed to fit physically inside the thermal desorption apparatus for analysis by GC/MS. The adsorbents are manufactured to have low background signal and to withstand the heat of thermal desorption. In addition, a duplicate sample is housed in each module and used in the event the first sample is lost to a lab problem, such as a power outage, or if additional information is needed.
The interpretation of features mapped using a seismic data set is based on acoustic impedance differences of subsurface materials. A towed seismic source emits acoustic energy at regular intervals. The transmitted energy is reflected from boundaries between media of different acoustic impedances (e.g. the water-sediment interface). The interface between different materials is displayed graphically in a seismic record, based on the time it takes the transmitted acoustic energy to travel from the source to each interface and back to the receiver, termed two-way travel time. If the speed of sound within the material is known, the depth to each interface can be calculated.
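The two relationships in that paragraph, reflection strength from the impedance contrast and depth from two-way time, are simple enough to write down directly. A minimal Python sketch, assuming a constant velocity above the interface; the example numbers are illustrative only.

def reflection_coefficient(rho1, v1, rho2, v2):
    # Normal-incidence reflection coefficient from the acoustic
    # impedances (density x velocity) on either side of the boundary.
    z1, z2 = rho1 * v1, rho2 * v2
    return (z2 - z1) / (z2 + z1)

def depth_from_twt(twt_s, velocity):
    # Depth to an interface from two-way travel time, assuming the
    # speed of sound above it is known and constant.
    return velocity * twt_s / 2.0

# Water-bottom example: water (1000 kg/m3, 1500 m/s) over sediment
# (2000 kg/m3, 1800 m/s), water bottom at 2.0 s two-way time
print(reflection_coefficient(1000.0, 1500.0, 2000.0, 1800.0))  # ~0.41
print(depth_from_twt(2.0, 1500.0))                             # 1500 m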



Zero Phase Data
Zero phase is the objective of almost all seismic data processing today and its interpretive benefits are well known; however, it is difficult to achieve. No more than 50% of seismic data achieves zero phase closely enough for its benefits and accuracy to be properly enjoyed. Furthermore, 90-degree phase is a remarkably common accident and, if not identified, can cause havoc in detailed interpretation.


All interpreters should know how to visually assess the phase and polarity of their data. I regularly meet those who discover late in the interpretation that the data has a different phase or opposite polarity to what was first thought. In this paper recommendations for phase and polarity assessment will be made, and several phase circles will be presented. For zero phase data time and amplitude are co-located, and many interpretive procedures on modern workstations are based on this fact. For other phases complications arise, because time and amplitude are in different locations. Suggestions will be offered for handling the all-too-common 90 degree phase data.
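For readers who want to experiment, a constant phase rotation can be applied with the analytic (Hilbert-transform) signal. This is only a sketch: the sign convention should be verified against the data, and a genuine phase correction would normally be designed in processing rather than on the workstation.

import numpy as np
from scipy.signal import hilbert

def rotate_phase(trace, degrees):
    # Constant phase rotation via the analytic signal. Under this sign
    # convention, rotate_phase(x, -90) would undo a 90-degree phase accident.
    analytic = hilbert(trace)
    return np.real(analytic * np.exp(1j * np.deg2rad(degrees)))

# A symmetric (zero-phase) Ricker pulse becomes antisymmetric after a
# 90-degree rotation.
t = np.linspace(-0.1, 0.1, 201)
ricker = (1 - 2 * (np.pi * 25 * t) ** 2) * np.exp(-(np.pi * 25 * t) ** 2)
rotated = rotate_phase(ricker, 90.0)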


Seismic Interpretation Steps
The analysis of seismic data includes the following steps:
1.   Reflectors Identification,
2.   Picking and Correlation of Reflectors,
3.   Fault Mapping,
4.   Closing Loops,
5.   Velocity Analysis, Digitization,
6.   Time to Depth Conversion,
7.   Constructing Seismogeological Cross-Sections
8.   Attribute Study: Amplitude, Frequency, Phase, Coherency, Spectral Decomposition
9.   Construction of the Isochronous and Structural Contour Maps,
10.  Generation of Prospect Maps (Structural/Stratigraphic),
11.  Reserve Estimation (Deterministic/Probabilistic).

1. Reflectors Identification:
It is usually better to start picking reflectors by inspecting seismic sections passing through boreholes. Reflectors are identified through tying the seismic sections to the well data. Composite logs are used to determine the depth to tops of different formations, while sonic logs and/or well velocity survey data are used to define the one-way times on these tops. Consequently two-way times are calculated and used to define the reflecting formation tops on the seismic sections.
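The well-tie arithmetic in this step is just depth and velocity to two-way time. A minimal Python sketch, assuming a single average velocity from a well velocity survey or integrated sonic log; the numbers are only an example.

def top_twt_ms(depth_to_top, avg_velocity):
    # Two-way time (ms) to a formation top from its depth below the
    # seismic reference datum, using an average velocity in consistent
    # units (m and m/s, or ft and ft/s).
    return 2.0 * (depth_to_top / avg_velocity) * 1000.0

# Example: top at 2500 m with an average velocity of 3000 m/s -> ~1667 ms
print(top_twt_ms(2500.0, 3000.0))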

Factors to be taken care of include low reflectivity of a formation top; in such a case, a nearby reflector that overlies the top of that formation by about 10 ms may be picked instead. A strong reflector is among the most powerful events on the seismic sections and exhibits the best continuity; this may be due, for example, to the presence of dolomite overlain by clastics.
2. Picking and correlation of reflectors: 
The studied horizon is picked across the seismic lines after reflector identification. Jump correlation is used to overcome any discontinuity of the reflections across gaps in the seismic coverage. When reflections on both sides of a gap appear at the same level along the seismic section, this may indicate a poor-quality survey. On the other hand, when reflectors are displaced vertically or disappear, the interruption may be due to faulting or pinch-out, respectively.
When reflectors change their character from high to low amplitude, the interpretation is in terms of changes in acoustic contrast or of thickening or thinning of the formations, respectively. It is often possible to correlate across small faults by using strike lines to carry the interpretation around their ends. In the case of a major structure it is usual, in the absence of any other information, to use the reflection character as a guide. The horizon is picked along the whole seismic grid by correlating the seismic events and tying their times.
3. Fault location detection: 
Faults with large vertical displacements are easily recognized, especially from the sudden stepping-out of reflections across their planes. Faults with small displacements are traced on the basis of Campbell's (1965) criteria, namely: correlation of characteristic reflectors, identification of reflection gaps, projection of shallow faults with correlated reflections to deeper levels, variations between drilled holes, geological and geophysical dips, misclosures around loops of seismic control beyond the probable limits of accuracy, dip patterns along several lines of control, and diffractions as both a mask and a clue to faulting.
4. Construction of seismic structural cross-sections: 
To visualize the subsurface structural configuration, interpreted geo-seismic cross-sections are constructed to show the fault patterns affecting the formation.
5. Closing loops: 
It is sometimes necessary to introduce faults, unsupported by other evidence, in order to make a picked reflector tie around a loop of survey lines. The coincidence of reflector times around a closed loop is taken as the well-defined starting point. It is recommended to close loops around surveyed wells first and then around the adjacent ones, depending on the quality of the reflectors; this minimizes personal mistakes, since loop closure accompanies the interpreter step by step.
6. Construction of isochronous map: 
The seismic data are analyzed primarily in terms of structural elements. This is achieved through the picking of the seismic reflection horizons of interest and the definition of the operative structural elements. Correlating seismic events, tying their times, closing their loops, posting their time values and fault segments, constructing the fault pattern, and contouring the arrival times are the basic steps followed in seismic interpretation (Coffeen, 1984). Accordingly, the fault cut-outs are picked and posted at their locations on the seismic shot-point location map to establish the fault pattern for the top. The two-way time values and the locations of the fault segments are then transferred to the base map (the shot-point location map of the study area) and contoured to construct the isochronous map on the top of the formation, a structural map characterizing the studied formation top in terms of two-way times. The conversion of this map into a depth map is carried out using the average velocity estimated from well data.
7. Velocity analysis: 
Formation or Interval Velocity, Vint: the velocity of a seismic wave travelling through a particular lithological unit or between any two consecutive reflectors; when measured over a very thin spacing it approaches the formation (intrinsic) velocity. Vint can be estimated from RMS velocities using the Dix inversion.
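A minimal sketch of the Dix inversion mentioned above, assuming RMS (stacking) velocities and their times have already been picked; real implementations add smoothing and sanity checks that are omitted here.

import numpy as np

def dix_interval_velocity(v_rms, twt):
    # Dix inversion: interval velocity between consecutive picks from
    # RMS velocities (v_rms) and their two-way times (twt), both 1-D
    # arrays of equal length with twt strictly increasing.
    v_rms = np.asarray(v_rms, dtype=float)
    twt = np.asarray(twt, dtype=float)
    num = v_rms[1:] ** 2 * twt[1:] - v_rms[:-1] ** 2 * twt[:-1]
    return np.sqrt(num / (twt[1:] - twt[:-1]))

# Example: picks at 1.0 s and 2.0 s TWT -> interval velocity of ~2565 m/s
print(dix_interval_velocity([2000.0, 2300.0], [1.0, 2.0]))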
Average/True/Vertical Velocity: the average velocity Vav is simply defined as the velocity down to a certain reflecting surface below the seismic reference datum (Dobrin, 1976):
Vav = Z/t
where Z is the depth to the reflecting surface from the wells and t is the one-way transit time to the reflector from the same reference. In the study area referred to here, the average velocity works out to 4589.056 ft/sec.

In North America there can be thousands of wells in a single field, and so much older software is based on well velocities only. In the subcontinent we do not find as many wells. The most reliable velocity is the well velocity; RMS/stacking/migration velocities can also be taken into account to generate Vav or Vint, which are in turn tied to the well velocity to obtain the final velocities.


8. Attribute Study: Amplitude, Frequency, Phase, Coherency, Spectral Decomposition
Coherence attribute blended with seismic amplitude section  which confirms a fault 
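For readers curious about what a coherence attribute actually measures, here is a deliberately simple Python sketch: zero-lag normalized cross-correlation between adjacent traces in a sliding window. Commercial semblance or eigenstructure coherence is considerably more sophisticated; this only illustrates the idea that low trace-to-trace similarity flags discontinuities such as faults.

import numpy as np

def simple_coherence(section, window=11):
    # section: 2-D array (time samples x traces); returns one coherence
    # value per sample for each adjacent trace pair. Edge samples are
    # left at 1.0 for simplicity.
    ns, ntr = section.shape
    half = window // 2
    coh = np.ones((ns, ntr - 1))
    for i in range(ntr - 1):
        a, b = section[:, i], section[:, i + 1]
        for t in range(half, ns - half):
            wa = a[t - half:t + half + 1]
            wb = b[t - half:t + half + 1]
            denom = np.linalg.norm(wa) * np.linalg.norm(wb)
            coh[t, i] = np.dot(wa, wb) / denom if denom > 0 else 0.0
    return coh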




Spectral Decomposition Attribute confirming a stratigraphic channel feature
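Spectral decomposition is, at its core, a time-frequency transform of each trace; mapping the amplitude at one frequency along a horizon is what lights up channels and thin beds. A minimal sketch using a short-time Fourier transform; the window length and the 20 Hz target are arbitrary choices for illustration.

import numpy as np
from scipy.signal import stft

def single_frequency_amplitude(trace, fs, target_hz=20.0):
    # Amplitude of one trace at (approximately) target_hz as a function
    # of time, via the short-time Fourier transform.
    f, t, z = stft(trace, fs=fs, nperseg=64)
    idx = np.argmin(np.abs(f - target_hz))
    return t, np.abs(z[idx, :])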




9. Conversion of reflection time to depth:
For the construction of seismogeologic structural cross-sections, average velocity values to the tops of the different formations, derived from wells, are applied using velocity-depth curves. Reflection times to the different formation tops are read along successive shot points and converted into corresponding depths using the average velocities determined from one or two wells along each seismic line. The isochronous map and the average velocity are used to convert the reflection times to depths in order to construct the structure contour map. The structural contour map is more meaningful if it shows the structural elements in terms of depth rather than time.
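A minimal sketch of that conversion, assuming a single average velocity applied to a gridded isochron map; a laterally varying velocity grid could be substituted without changing the arithmetic.

import numpy as np

def isochron_to_depth(twt_ms_grid, avg_velocity):
    # Convert a two-way-time grid (ms) to a depth grid using an average
    # velocity in consistent units (e.g. ft/s gives depth in ft).
    return np.asarray(twt_ms_grid, dtype=float) / 1000.0 / 2.0 * avg_velocity

# Example: 1200 ms TWT with Vav = 4589 ft/s -> about 2753 ft
print(isochron_to_depth([[1200.0]], 4589.0))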



10. Generation of Prospect Maps(structural/Stratigraphic): 
This is one of the most significant parts of the workflow: mapping all the prospect locations on the map to get an idea of the surface area involved in the exploration.

11. Reserve Estimation(Deterministic/Probabilistic): 
The volumetric method entails determining the physical size of the reservoir, the pore volume within the rock matrix, and the fluid content within the void space. This provides an estimate of the hydrocarbons-in-place, from which ultimate recovery can be estimated using an appropriate recovery factor. Each of the factors used in the calculation has inherent uncertainties that, when combined, cause significant uncertainty in the reserves estimate.






Bo = Reservoir Barrels / Stock Tank Barrels. The Bo typically ranges from 1.0 to 1.7.


Original Oil in-Place (OOIP) in STB = 7,758 * HCPV / Bo
= [7,758 * GRV * N/G * φ * (1 - Sw)] / Bo

Original Gas in-Place (OGIP) in SCF = 43,560 * HCPV / Bg
= 43,560 * GRV * N/G * φ * (1 - Sw) / Bg

where HCPV = GRV * N/G * φ * (1 - Sw) is the hydrocarbon pore volume in acre-feet.
GRV = Gross Rock Volume (acre-feet) = A * h
N/G = Fraction of the gross rock volume formed by reservoir rock (range 0 to 1)
A = Drainage area, acres
h = Formation thickness, feet
7,758 = Barrels per acre-foot (converts acre-feet to stock tank barrels)
43,560 = Square feet per acre (converts acre-feet to cubic feet)
φ = Porosity, fraction of rock volume available to store fluids
Sw = Volume fraction of porosity filled with interstitial water
Bo = Oil formation volume factor (reservoir Bbl/STB)
1/Bo = Shrinkage (STB/reservoir Bbl)
Bg = Gas formation volume factor: the reservoir volume occupied by one standard cubic foot of gas at the surface (rcf/scf)
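Putting the volumetric formulas above into code makes the deterministic calculation explicit. A minimal Python sketch; the example inputs are arbitrary and only for illustration.

def ooip_stb(area_acres, h_ft, ntg, phi, sw, bo):
    # OOIP in stock-tank barrels: 7,758 * A * h * N/G * phi * (1 - Sw) / Bo
    return 7758.0 * area_acres * h_ft * ntg * phi * (1.0 - sw) / bo

def ogip_scf(area_acres, h_ft, ntg, phi, sw, bg_rcf_per_scf):
    # OGIP in standard cubic feet: 43,560 * A * h * N/G * phi * (1 - Sw) / Bg
    return 43560.0 * area_acres * h_ft * ntg * phi * (1.0 - sw) / bg_rcf_per_scf

# Example: 640 acres, 50 ft pay, N/G 0.8, 20% porosity, 30% water
# saturation, Bo = 1.2 RB/STB -> about 23 million STB in place
print(f"OOIP = {ooip_stb(640, 50, 0.8, 0.20, 0.30, 1.2):,.0f} STB")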


Monte Carlo method is a technique that involves using random numbers and probability to solve problems.

Monte Carlo simulation is a method for iteratively evaluating a deterministic model using sets of random numbers as inputs. This method is often used when the model is complex, nonlinear, or involves multiple uncertain parameters.

Simply put, Monte Carlo simulation is a method to quantify uncertainty, and is used to determine a probability for the calculated range of estimated reservoir volumes.
To apply Monte Carlo, one needs a mathematical model and the cumulative distribution function (CDF) of each random variable that will be input to the model, in order to find the unknown output distribution. Each input variable is randomly sampled and the model is used to calculate the unknown quantity (the output random variable). This process is repeated a sufficient number of times to generate a probability distribution for the output random variable; the whole procedure is called Monte Carlo simulation.
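A minimal Monte Carlo sketch for the volumetric OOIP calculation above; the input distributions and their ranges are illustrative placeholders, not recommendations, and in practice they would come from the field data and their uncertainties.

import numpy as np

def monte_carlo_ooip(n=100_000, seed=0):
    # Sample the volumetric inputs from assumed distributions and report
    # P90/P50/P10 of the resulting OOIP distribution (industry convention:
    # P90 is the value exceeded with 90% probability, i.e. the 10th percentile).
    rng = np.random.default_rng(seed)
    area = rng.triangular(400.0, 640.0, 900.0, n)   # acres
    h = rng.triangular(30.0, 50.0, 80.0, n)         # ft
    ntg = rng.uniform(0.6, 0.9, n)
    phi = np.clip(rng.normal(0.20, 0.03, n), 0.05, 0.35)
    sw = rng.uniform(0.25, 0.40, n)
    bo = rng.uniform(1.1, 1.4, n)                   # RB/STB
    ooip = 7758.0 * area * h * ntg * phi * (1.0 - sw) / bo
    p90, p50, p10 = np.percentile(ooip, [10, 50, 90])
    return p90, p50, p10

print(["{:,.0f} STB".format(v) for v in monte_carlo_ooip()])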

Monte Carlo simulation is used in almost every industry to analyze the risk involved in tasks that require huge investments (billions of dollars). The oil industry also uses this simulation technique as a basic requirement for its financial studies: it takes the various input data and finally produces a statistical analysis of the outcomes.

