cal - Calculation algorithms
Name
cal — Calculate various functions and algorithmic features
Syntax
Calculation of the z-value or the scalar-value of a dataset in active memory:
cal [sca] sin | cos | tan | atan | exp | ln | log | abs | pow | sqr | min | max | mul | div | add | sub | fix
Calculation of various algorithms and graphical features:
cal spi | smo | ang ; # calculation of spikes, or smoothing, or angles in line data
cal cut | cpo | cli | ccl | cll ; # calculate cut operations in various data sets
cal sva [zvalue | cur] [ext] ; # calculate sample values in stacking velocities
cal sort x | y | z | s [dsc] ; # Sort data in coordinate direction
cal stdev grid_in_workspace [rem | use] ; # calculate standard deviation of points towards a grid surface
cal aiv [sca | fd fielddata] ; # Generate absolute increasing values in z or optionally in scalars or field data.
cal check [cur] ; # calculate the regridded grid according to decimation and zoom, or use a lookup curve
cal xbo [add addvalue] [fix fixvalue] ; # calculate an extra border around a grid
cal xgr [num number] [bol] ; # Expand grid in undefined area
cal sba ; # calculate scalebar
cal rgr ; # calculate and create a regression line
cal morf [hi morfname] ; # will morf a polydata in active according to the morf dataset
Description
The *cal* command has two sections:
- calculation of the z-value or the scalar-value of a dataset in active data.
- calculation of various algorithms and graphical features.
Arguments
Calculation of the z-value or the scalar-value of a dataset in active data
sca
The operation is performed on the scalar-value instead of the z-value
sin
z = sin(z)
cos
z = cos(z)
tan
z = tan(z)
atan
z = atan(z)
exp
z = exp(z)
ln
z = ln(z)
log
z = log(z)
abs
z = abs(z)
pow
z = pow(z) (= z*z) ; # squared
min
z = min(z) = [gvar zmin]
max
z = max(z) = [gvar zmax]
mul value
z = z * value
div value
z = z / value
add value
z = z + value
sub value
z = z - value
fix value
z = value
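For example, to rescale z values, or to operate on the scalars instead (the values below are illustrative):
cal mul 0.001 ; # scale all z values by 0.001
cal sca add 100 ; # add 100 to the scalar values instead of z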
Calculation of spikes, or smoothing, or angles in line data
cal spi | smo | ang
spi [testdistance] [rem] [fau]
Three consecutive points are tested for spikes. Point 2 in the sequence 1 2 3 is tested against 1 and 3. The results are saved and displayed in workspace spikes. The scalar values will contain the spike distance.
rem
Remove spike points.
fau
Also test points 1 and 3 against the other two points for being spikes.
smo [distance] [fct]
Lines with noise can be smoothed using an algorithm where each point is averaged over a distance to both sides of the point. The default distance is 0.05 of the window area diagonal.
distance
Real distance within the smooth limit.
fct factor
A factor multiplied with the distance.
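A possible smoothing sequence (the distance value is illustrative):
cal smo 200 ; # smooth each point over a real distance of 200
cal smo fct 2 ; # or smooth using twice the default distance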
ang [0 | 1 | 2 | cur] [deg]
Calculate angles in a line for a point to the next.
[0 | 1 | 2 | cur]
0: from the point to the next. 1: from the point to the previous. 2: the average of 0 and 1. cur: between the two last cursor positions.
deg
Results are in degrees; the default is radians. The results are put into the scalars. The x, y and z angles in degrees are saved in [gvar xpos], [gvar ypos] and [gvar zpos]. All angles are projected into the xy plane.
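For example, to compute averaged angles in degrees:
cal ang 2 deg ; # average of forward and backward angles, in degrees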
morf [hi morfname]
Can transform (morf) polydata that is incorrectly digitized or positioned. Morfing is done according to a morf dataset in workspace morf (or hi morfname). The morf dataset consists of pairs of point positions: old position - new position; i.e. digitize a position and then the new position, for as many pairs as needed. Place this dataset in workspace morf and place the polydata dataset to be morfed in active.
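A minimal sketch, assuming the pair dataset has already been digitized and saved in the hypothetical workspace mypairs:
cal morf hi mypairs ; # morf the polydata in active using the pairs in workspace mypairs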
Calculate cut operations in various datasets
cal cut | cpo | cli | ccl | cll
cut level
Calculate a line at a level on dataset in active.
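For example (the level value is illustrative):
cal cut 1000 ; # calculate the contour line at level 1000 on the dataset in active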
cpo hiname
Calculate point(s) for intercepts between a line in active and a grid surface in workspace hiname. Result is one or more point(s).
cli hiname
Calculate crossing lines between active dataset and grid in workspace hiname. Result is a line dataset in active.
ccl hiname
Calculate crossing lines using contouring between active dataset and grid in workspace hiname. Result is a line dataset in active.
cll [zero]
If no workspace name is given, the command will calculate crossing points between lines in the active dataset. The result is a line dataset in active generated between the crossing points; a copy of the crossing points is also placed in workspace xpoints. Cross points are generated in all lines, and the scalars contain the difference values between the lines where they cross in x and y position.
The zero argument will create all the cross points at level zero and combine coinciding points. The scalars will contain the differences between the lines, also called missties.
cll hiname
Calculate crossing points of lines between active dataset and line in workspace hiname. Result is a line dataset in active that is generated between the crossing points. If the lines do not cross exactly in 3D space, the z coordinate will be correct only for the dataset in active. The scalars will contain the difference values between the lines where they cross in x and y position.
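Two illustrative invocations (the workspace name other is hypothetical):
cal cll zero ; # cross points at level zero within active, with misstie scalars
cal cll other ; # cross points between active and the line in workspace other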
Calculate sample values in stacking velocities
cal sva [zvalue | cur] [ext]
cal sva zvalue
Will calculate the crossing values between stacking velocities and the z level value zvalue.
cal sva cur
Will calculate the crossing values between stacking velocities and the z level value given by the cursor position.
cal ... ext
Use extrapolation of scalar (velocity) values if a stacking velocity line is outside the z value.
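For example (the z value is illustrative):
cal sva 2500 ext ; # sample velocities at z = 2500, extrapolating outside the lines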
Sort data in coordinate direction
cal sort x | y | z | s [dsc]
Sort data in coordinate direction x / y / z / s in ascending (or, with dsc, descending) order.
Sort data in nearest point direction
cal sort nea
Sort data in the direction of the nearest point. First use cal sort x | y | z | s [dsc] to get a correct initial starting point.
Below is a cross section line that first is sorted in the x direction. It still has some problem areas where it bends back on itself. Then it is sorted in the nearest point direction.
Line sorted in x direction and then in nearest point direction
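The sequence used for such a line could be:
cal sort x ; # initial sort in the x direction
cal sort nea ; # then sort in the nearest point direction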
Calculate standard deviation of points towards a grid surface
cal stdev grid_in_workspace
A set of points is in active and a grid is placed in grid_in_workspace. Standard deviation and variance are calculated for the distance of the points down to the surface and saved in @standev and @var. These values are also listed in the message area.
cal stdev grid_in_workspace rem | use [fct factor] [lt]
rem: Points are removed if they are further away than the standard deviation, or, with fct factor, further than standard_deviation * factor.
lt: Points are removed if their distance to the surface is lower than standard_deviation * factor. That will catch the erroneous points.
use: The removal operation will use the previously generated standard deviation.
Generate absolute increasing values in z or optionally in scalars or field data
cal aiv [sca | fd fielddata]
The first z (or scalar or field data) value is kept. If the next value is lower, it is rejected, and so on through the whole file.
cal aiv can be used to correct a stacking velocity dataset that is expected to have only increasing velocity values.
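For example, on a stacking velocity dataset in active:
cal aiv sca ; # enforce increasing scalar (velocity) values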
Calculate the regridded grid according to decimation and zoom
cal check
A grid is subjected to decimation when it is displayed. cal check will calculate the regridded grid according to the current decimation and zoom.
cal check cur
The decimation will be done using a lookup curve and regridding at that decimation value is performed.
Calculate extra border or cells in a grid
cal xbo [add addvalue] [fix fixvalue]
To calculate an extra border around a grid.
add addvalue means to add a value addvalue to the new grid cell rim.
fix fixvalue means to fix a value fixvalue to the new grid cell rim.
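For example (the fix value is illustrative):
cal xbo fix 0 ; # add a border rim of cells fixed at value 0 around the grid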
cal xgr [num number] [bol]
To calculate a number of extra cells in the undefined area around a grid.
num number specifies the number of extra cells.
bol means to draw a boundary line along with the expansion.
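For example (the number of cells is illustrative):
cal xgr num 5 bol ; # expand the grid 5 cells into the undefined area and draw a boundary line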
Calculate and display a scale bar
A scale bar will be generated at the lower part of the graphical window. Many options exist to determine the extent and appearance of the scale bar. The scale bar can also be accessed from Tools->scalebar.
cal sba [flip | int | num | nosha | col | pcol | anno | km | div | size | thick | cur | coor | adj | bor]
cal sba will calculate and display the default scale bar. The following options will change the appearance.
flip will reverse (flip) the scale bar shading.
int interval will use interval in the scale bar.
num number_of_bars for setting the number of bars in the scale bar.
nosha will not use shaded scale bars, but a simple line version.
col whi | red | gre |... for setting the color of the scale bar.
pcol whi | red | gre |... for setting the annotation color in the scale bar.
anno annotation_text for setting the annotation text in the scale bar.
km | KM | Km | kilometer for using kilometer in the scale bar. Default is meter. With kilometer, the annotation values are divided by 1000.
div division for setting the division number for annotation points in the scale bar.
size size_of_annotation for setting the annotation size for annotation points in the scale bar.
thick thickness_factor_of_scalebar for setting the scale bar thickness factor relative to the default thickness. A factor of 1 keeps the default thickness.
cur for setting the cursor point as start point of the scale bar.
coor xcoor ycoor zcoor for setting the placement of the scalebar using world coordinates.
adj adjust_location_value for adjusting the default placement of the scalebar. The adjust_location_value is given in user coordinates.
bor whi | red | gre |... for activating and setting the border color in the scalebar.
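An illustrative combination of options:
cal sba km num 5 col whi ; # kilometer annotation, 5 bars, white scale bar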
Calculate regression line for a point dataset
cal rgr
Input is a set of x,y points (z is ignored). Output is a straight regression line that is the best fit. The slope is in [gvar slope] and the intercept is in [gvar intercept].
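A small sketch using commands from the examples on this page (the point count is illustrative):
mak ran 50 ; # make 50 random points
cal rgr ; # calculate the best fit line; slope in [gvar slope], intercept in [gvar intercept]
dis ; # display the regression line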
Examples
Ex.1: Removal of spikes in a single line
# remove spikes by a distance test
cal spi 500 rem
The example above filtered spikes in the line shown in the image below.
Ex.2: Removal of spikes for multibeam data
# remove spikes by a standard deviation test
z ; # erase the screen
win demo ; # get demo window
mak ran 22 ; # make random points
grp 222 222 ; # generate surface
col tur ; # select turquoise color
dis ; # display the surface
mhi gg ; # save in gg
mak ran 444444 ; # make a large number of random points
mhi pp ; # save in pp
cal stdev gg rem 0.1 ; # calculate standard deviation and remove points further away than 0.1 * std. dev.
poi map ; # display remaining points
cal stdev gg ; # calculate standard deviation of remaining points
The example above filtered spikes in the large random dataset.
The initial standard deviation was calculated to be 1049.87; 30843 points remained, with standard deviation 60.4146.
Ex.3: Sinus calculation from a grid
# Demonstration of sinus calculation from a grid
vie 2 1 1 ; # make 2 viewports
mak ran 11 ; # make 11 random points
grp 222 222 ; # make grid of dimension 222 x 222
map ; # map the grid
cco map ; # color code legend with map option
vie 2 ; # select viewport 2
cal sin ; # calculate sinus values
map ; # map the new grid
cco map tit "Sinus" ; # color code legend and title
The above example produces this image.
Grid surface to the left and the sinus map to the right
See also
add - Add to dataset, sub - Subtract operation, mul - Multiply operation, div - Divide operation, sca - Scale data, mak nsc