Module gwy :: Class DataField

Class DataField

source code

Two-dimensional data representation

DataField is an object that is used for representation of all two-dimensional data matrices. Most of the basic data handling and processing functions in Gwyddion are declared here as they are connected with DataField.

Instance Methods
 
__init__(xres, yres, xreal, yreal, nullme)
Creates a new data field.
source code
 
sum_fields(operand1, operand2)
Sums two data fields.
source code
 
subtract_fields(operand1, operand2)
Subtracts one data field from another.
source code
 
divide_fields(operand1, operand2)
Divides one data field with another.
source code
 
multiply_fields(operand1, operand2)
Multiplies two data fields.
source code
 
min_of_fields(operand1, operand2)
Finds point-wise minima of two data fields.
source code
 
max_of_fields(operand1, operand2)
Finds point-wise maxima of two data fields.
source code
 
hypot_of_fields(operand1, operand2)
Finds point-wise hypotenuse of two data fields.
source code
 
linear_combination(coeff1, operand1, coeff2, operand2, constant)
Computes point-wise general linear combination of two data fields.
source code
 
check_compatibility(data_field2, check)
Checks whether two data fields are compatible.
source code
 
check_compatibility_with_brick_xy(brick, check)
Checks whether a data field is compatible with brick XY-planes.
source code
 
check_compatibility_with_lawn(lawn, check) source code
 
extend(left, right, up, down, exterior, fill_value, keep_offsets)
Creates a new data field by extending another data field using the specified method of exterior handling.
source code
 
laplace_solve(mask, grain_id, qprec)
Replaces masked areas by the solution of Laplace equation.
source code
 
correct_laplace_iteration(mask_field, buffer_field, corrfactor)
Performs one iteration of Laplace data correction.
source code
 
correct_average(mask_field)
Fills data under mask with the average value.
source code
 
correct_average_unmasked(mask_field)
Fills data under mask with the average value of unmasked data.
source code
 
mask_outliers(mask_field, thresh)
Creates mask of data that are above or below thresh*sigma from average height.
source code
 
mask_outliers2(mask_field, thresh_low, thresh_high)
Creates mask of data that are above or below multiples of rms from average height.
source code
 
distort(dest, invtrans, user_data, interp, exterior, fill_value)
Distorts a data field in the horizontal plane.
source code
 
sample_distorted(dest, coords, interp, exterior, fill_value)
Resamples a data field in an arbitrarily distorted manner.
source code
 
mark_scars(result, threshold_high, threshold_low, min_scar_len, max_scar_width, negative)
Finds and marks scars in a data field.
source code
 
subtract_row_shifts(shifts)
Shifts entire data field rows as specified by given data line.
source code
 
find_row_shifts_trimmed_mean(mask, masking, trimfrac, mincount)
Finds row shifts for misaligned row correction using trimmed row means.
source code
 
find_row_shifts_trimmed_diff(mask, masking, trimfrac, mincount)
Finds row shifts for misaligned row correction using trimmed means of row differences.
source code
 
get_correlation_score(kernel_field, col, row, kernel_col, kernel_row, kernel_width, kernel_height)
Calculates a correlation score in one point.
source code
 
get_weighted_correlation_score(kernel_field, weight_field, col, row, kernel_col, kernel_row, kernel_width, kernel_height)
Calculates a correlation score in one point using weights to center the used information to the center of kernel.
source code
 
crosscorrelate(data_field2, x_dist, y_dist, score, search_width, search_height, window_width, window_height)
Algorithm for matching two different images of the same object under changes.
source code
 
crosscorrelate_init(data_field2, x_dist, y_dist, score, search_width, search_height, window_width, window_height)
Initializes a cross-correlation iterator.
source code
 
correlate(kernel_field, score, method)
Computes correlation score for all positions in a data field.
source code
 
correlate_init(kernel_field, score)
Creates a new correlation iterator.
source code
 
correlation_search(kernel, kernel_weight, target, method, regcoeff, exterior, fill_value)
Performs correlation search of a detail in a larger data field.
source code
 
invalidate()
Invalidates cached data field stats.
source code
 
new_alike(nullme)
Creates a new data field similar to an existing one.
source code
 
data_changed()
Emits signal "data-changed" on a data field.
source code
 
new_resampled(xres, yres, interpolation)
Creates a new data field by resampling an existing one.
source code
 
resample(xres, yres, interpolation)
Resamples a data field using given interpolation method.
source code
 
bin(target, binw, binh, xoff, yoff, trimlowest, trimhighest)
Bins a data field into another data field.
source code
 
new_binned(binw, binh, xoff, yoff, trimlowest, trimhighest)
Creates a new data field by binning an existing one.
source code
 
resize(ulcol, ulrow, brcol, brrow)
Resizes (crops) a data field.
source code
 
area_extract(col, row, width, height)
Extracts a rectangular part of a data field to a new data field.
source code
 
copy(dest, nondata_too)
Copies the contents of an already allocated data field to a data field of the same size.
source code
 
area_copy(dest, col, row, width, height, destcol, destrow)
Copies a rectangular area from one data field to another.
source code
 
get_xres()
Gets X resolution (number of columns) of a data field.
source code
 
get_yres()
Gets Y resolution (number of rows) of the field.
source code
 
get_xreal()
Gets the X real (physical) size of a data field.
source code
 
get_yreal()
Gets the Y real (physical) size of a data field.
source code
 
set_xreal(xreal)
Sets X real (physical) size value of a data field.
source code
 
set_yreal(yreal)
Sets Y real (physical) size value of a data field.
source code
 
get_dx()
Gets the horizontal pixel size of a data field in real units.
source code
 
get_dy()
Gets the vertical pixel size of a data field in real units.
source code
 
get_xoffset()
Gets the X offset of data field origin.
source code
 
get_yoffset()
Gets the Y offset of data field origin.
source code
 
set_xoffset(xoff)
Sets the X offset of a data field origin.
source code
 
set_yoffset(yoff)
Sets the Y offset of a data field origin.
source code
 
get_si_unit_xy()
Returns lateral SI unit of a data field.
source code
 
get_si_unit_z()
Returns value SI unit of a data field.
source code
 
set_si_unit_xy(si_unit)
Sets the SI unit corresponding to the lateral (XY) dimensions of a data field.
source code
 
set_si_unit_z(si_unit)
Sets the SI unit corresponding to the "height" (Z) dimension of a data field.
source code
 
get_value_format_xy(style)
Finds value format good for displaying coordinates of a data field.
source code
 
get_value_format_z(style)
Finds value format good for displaying values of a data field.
source code
 
copy_units(target)
Sets lateral and value units of a data field to match another data field.
source code
 
copy_units_to_data_line(data_line)
Sets lateral and value units of a data line to match a data field.
source code
 
itor(row)
Transforms vertical pixel coordinate to real (physical) Y coordinate.
source code
 
jtor(col)
Transforms horizontal pixel coordinate to real (physical) X coordinate.
source code
 
rtoi(realy)
Transforms real (physical) Y coordinate to row.
source code
 
rtoj(realx)
Transforms real (physical) X coordinate to column.
source code
 
get_val(col, row)
Gets value at given position in a data field.
source code
 
set_val(col, row, value)
Sets value at given position in a data field.
source code
 
get_dval(x, y, interpolation)
Gets interpolated value at arbitrary data field point indexed by pixel coordinates.
source code
 
get_dval_real(x, y, interpolation)
Gets interpolated value at arbitrary data field point indexed by real coordinates.
source code
 
rotate(angle, interpolation)
Rotates a data field by a given angle.
source code
 
new_rotated(exterior_mask, angle, interp, resize)
Creates a new data field by rotating a data field by an arbitrary angle.
source code
 
new_rotated_90(clockwise)
Creates a new data field by rotating a data field by 90 degrees.
source code
 
invert(x, y, z)
Reflects and/or inverts a data field.
source code
 
flip_xy(dest, minor)
Copies data from one data field to another with transposition.
source code
 
area_flip_xy(col, row, width, height, dest, minor)
Copies data from a rectangular part of one data field to another with transposition.
source code
 
fill(value)
Fills a data field with given value.
source code
 
clear()
Fills a data field with zeroes.
source code
 
multiply(value)
Multiplies all values in a data field by given value.
source code
 
add(value)
Adds given value to all values in a data field.
source code
 
abs()
Takes absolute value of all values in a data field.
source code
 
area_fill(col, row, width, height, value)
Fills a rectangular part of a data field with given value.
source code
 
area_fill_mask(mask, mode, col, row, width, height, value)
Fills a masked rectangular part of a data field with given value.
source code
 
area_clear(col, row, width, height)
Fills a rectangular part of a data field with zeroes.
source code
 
area_multiply(col, row, width, height, value)
Multiplies values in a rectangular part of a data field by given value.
source code
 
area_add(col, row, width, height, value)
Adds given value to all values in a rectangular part of a data field.
source code
 
area_abs(col, row, width, height)
Takes absolute value of values in a rectangular part of a data field.
source code
 
get_profile(scol, srow, ecol, erow, res, thickness, interpolation)
Extracts a possibly averaged profile from data field to a data line.
source code
 
get_row(data_line, row)
Extracts a data field row into a data line.
source code
 
get_column(data_line, col)
Extracts a data field column into a data line.
source code
 
set_row(data_line, row)
Sets a row in the data field to values of a data line.
source code
 
set_column(data_line, col)
Sets a column in the data field to values of a data line.
source code
 
get_row_part(data_line, row, from_, to)
Extracts part of a data field row into a data line.
source code
 
get_column_part(data_line, col, from_, to)
Extracts part of a data field column into a data line.
source code
 
set_row_part(data_line, row, from_, to)
Puts a data line into a data field row.
source code
 
set_column_part(data_line, col, from_, to)
Puts a data line into data field column.
source code
 
get_xder(col, row)
Computes central derivative in X direction.
source code
 
get_yder(col, row)
Computes central derivative in Y direction.
source code
 
get_angder(col, row, theta)
Computes derivative in direction specified by given angle.
source code
 
average_xyz(density_map, points, npoints)
Fills a data field with regularised XYZ data using a simple method.
source code
 
xdwt(wt_coefs, direction, minsize)
Performs steps of the X-direction image wavelet decomposition.
source code
 
ydwt(wt_coefs, direction, minsize)
Performs steps of the Y-direction image wavelet decomposition.
source code
 
dwt(wt_coefs, direction, minsize)
Performs steps of the 2D image wavelet decomposition.
source code
 
dwt_mark_anisotropy(mask, wt_coefs, ratio, lowlimit)
Performs steps of the 2D image wavelet decomposition.
source code
 
elliptic_area_fill(col, row, width, height, value)
Fills an elliptic region of a data field with given value.
source code
 
get_elliptic_intersection(col, row, width, height)
Calculates an upper bound of the number of samples in an elliptic region intersecting a data field.
source code
 
circular_area_fill(col, row, radius, value)
Fills a circular region of a data field with given value.
source code
 
normalize()
Normalizes data in a data field to range 0.0 to 1.0.
source code
 
renormalize(range, offset)
Transforms data in a data field with linear function to given range.
source code
 
area_renormalize(col, row, width, height, range, offset)
Transforms data in a part of a data field with linear function to given range.
source code
 
threshold(threshval, bottom, top)
Thresholds values of a data field.
source code
 
area_threshold(col, row, width, height, threshval, bottom, top)
Thresholds values of a rectangular part of a data field.
source code
 
clamp(bottom, top)
Limits data field values to a range.
source code
 
area_clamp(col, row, width, height, bottom, top)
Limits values in a rectangular part of a data field to a range.
source code
 
area_gather(result, buffer, hsize, vsize, average, col, row, width, height)
Sums or averages values in rectangular areas around each sample in a data field.
source code
 
convolve(kernel_field)
Convolves a data field with given kernel.
source code
 
area_convolve(kernel_field, col, row, width, height)
Convolves a rectangular part of a data field with given kernel.
source code
 
fft_convolve(kernel_field)
Convolves a data field with given kernel of the same size using FFT.
source code
 
area_ext_convolve(col, row, width, height, target, kernel, exterior, fill_value, as_integral)
Convolve a field with a two-dimensional kernel.
source code
 
convolve_1d(kernel_line, orientation)
Convolves a data field with given linear kernel.
source code
 
area_convolve_1d(kernel_line, orientation, col, row, width, height)
Convolves a rectangular part of a data field with given linear kernel.
source code
 
area_ext_row_convolve(col, row, width, height, target, kernel, exterior, fill_value, as_integral)
Convolve a field row-wise with a one-dimensional kernel.
source code
 
filter_median(size)
Filters a data field with median filter.
source code
 
area_filter_median(size, col, row, width, height)
Filters a rectangular part of a data field with median filter.
source code
 
filter_mean(size)
Filters a data field with mean filter of size size.
source code
 
area_filter_mean(size, col, row, width, height)
Filters a rectangular part of a data field with mean filter of size size.
source code
 
filter_conservative(size)
Filters a data field with conservative denoise filter.
source code
 
area_filter_conservative(size, col, row, width, height)
Filters a rectangular part of a data field with conservative denoise filter.
source code
 
filter_laplacian()
Filters a data field with Laplacian filter.
source code
 
area_filter_laplacian(col, row, width, height)
Filters a rectangular part of a data field with Laplacian filter.
source code
 
filter_laplacian_of_gaussians()
Filters a data field with Laplacian of Gaussians filter.
source code
 
area_filter_laplacian_of_gaussians(col, row, width, height)
Filters a rectangular part of a data field with Laplacian of Gaussians filter.
source code
 
filter_sobel(orientation)
Filters a data field with a directional Sobel filter.
source code
 
area_filter_sobel(orientation, col, row, width, height)
Filters a rectangular part of a data field with a directional Sobel filter.
source code
 
filter_sobel_total()
Filters a data field with total Sobel filter.
source code
 
filter_prewitt(orientation)
Filters a data field with Prewitt filter.
source code
 
area_filter_prewitt(orientation, col, row, width, height)
Filters a rectangular part of a data field with a directional Prewitt filter.
source code
 
filter_prewitt_total()
Filters a data field with total Prewitt filter.
source code
 
filter_slope(xder, yder)
Calculates X and Y derivatives for an entire field.
source code
 
filter_gauss_step(sigma)
Processes a data field with Gaussian step detection filter.
source code
 
filter_dechecker()
Filters a data field with 5x5 checker pattern removal filter.
source code
 
area_filter_dechecker(col, row, width, height)
Filters a rectangular part of a data field with 5x5 checker pattern removal filter.
source code
 
filter_gaussian(sigma)
Filters a data field with a Gaussian filter.
source code
 
area_filter_gaussian(sigma, col, row, width, height)
Filters a rectangular part of a data field with a Gaussian filter.
source code
 
row_gaussian(sigma)
Filters a data field with a Gaussian filter in horizontal direction.
source code
 
column_gaussian(sigma)
Filters a data field with a Gaussian filter in vertical direction.
source code
 
filter_minimum(size)
Filters a data field with minimum filter.
source code
 
area_filter_minimum(size, col, row, width, height)
Filters a rectangular part of a data field with minimum filter.
source code
 
filter_maximum(size)
Filters a data field with maximum filter.
source code
 
area_filter_maximum(size, col, row, width, height)
Filters a rectangular part of a data field with maximum filter.
source code
 
area_filter_min_max(kernel, filtertype, col, row, width, height)
Applies a morphological operation with a flat structuring element to a part of a data field.
source code
 
area_filter_disc_asf(radius, closing, col, row, width, height)
Applies an alternating sequential morphological filter with a flat disc structuring element to a part of a data field.
source code
 
area_filter_kth_rank(kernel, col, row, width, height, k)
Applies a k-th rank filter to a part of a data field.
source code
 
area_filter_trimmed_mean(kernel, col, row, width, height, nlowest, nhighest)
Applies a trimmed mean filter to a part of a data field.
source code
 
filter_rms(size)
Filters a data field with RMS filter.
source code
 
area_filter_rms(size, col, row, width, height)
Filters a rectangular part of a data field with RMS filter of size size.
source code
 
filter_kuwahara()
Filters a data field with Kuwahara filter.
source code
 
area_filter_kuwahara(col, row, width, height)
Filters a rectangular part of a data field with a Kuwahara (edge-preserving smoothing) filter.
source code
 
filter_canny(threshold)
Filters a data field with Canny edge detector filter.
source code
 
shade(target_field, theta, phi)
Shades a data field.
source code
 
filter_harris(y_gradient, result, neighbourhood, alpha)
Applies Harris corner detection filter to a pair of gradient data fields.
source code
 
deconvolve_regularized(operand, out, sigma)
Performs deconvolution of a data field using a simple regularization.
source code
 
deconvolve_psf_leastsq(operand, out, sigma, border)
Performs reconstruction of transfer function from convolved and ideal sharp images.
source code
 
find_regularization_sigma_for_psf(ideal)
Finds regularization parameter for point spread function calculation using regularized deconvolution.
source code
 
find_regularization_sigma_leastsq(ideal, width, height, border)
Finds regularization parameter for point spread function calculation using least squares method.
source code
 
fractal_partitioning(xresult, yresult, interpolation)
Computes data for log-log plot by partitioning.
source code
 
fractal_cubecounting(xresult, yresult, interpolation)
Computes data for log-log plot by cube counting.
source code
 
fractal_triangulation(xresult, yresult, interpolation)
Computes data for log-log plot by triangulation.
source code
 
fractal_psdf(xresult, yresult, interpolation)
Computes data for log-log plot by spectral density method.
source code
 
fractal_correction(mask_field, interpolation)
Replaces data under mask with interpolated values using fractal interpolation.
source code
 
grains_mark_curvature(grain_field, threshval, below)
Marks data that are above/below curvature threshold.
source code
 
grains_mark_watershed(grain_field, locate_steps, locate_thresh, locate_dropsize, wshed_steps, wshed_dropsize, prefilter, below)
Performs watershed algorithm.
source code
 
grains_remove_grain(col, row)
Removes one grain at given position.
source code
 
grains_extract_grain(col, row)
Removes all grains except that one at given position.
source code
 
grains_remove_by_number(number)
Removes grain identified by number.
source code
 
grains_remove_by_size(size)
Removes all grains below specified area.
source code
 
grains_remove_by_height(grain_field, threshval, below)
Removes grains that are higher/lower than given threshold value.
source code
 
grains_remove_touching_border()
Removes all grains that touch field borders.
source code
 
grains_watershed_init(grain_field, locate_steps, locate_thresh, locate_dropsize, wshed_steps, wshed_dropsize, prefilter, below)
Initializes the watershed algorithm.
source code
 
grains_mark_height(grain_field, threshval, below)
Marks data that are above/below height threshold.
source code
 
grains_mark_slope(grain_field, threshval, below)
Marks data that are above/below slope threshold.
source code
 
otsu_threshold()
Finds Otsu's height threshold for a data field.
source code
 
grains_add(add_field)
Adds add_field grains to grain_field.
source code
 
grains_intersect(intersect_field)
Performs intersection between two grain fields; the result is stored in grain_field.
source code
 
grains_invert()
Inverts a data field representing a mask.
source code
 
grains_autocrop(symmetrically)
Removes empty border rows and columns from a data field representing a mask.
source code
 
area_grains_tgnd(target_line, col, row, width, height, below, nstats)
Calculates threshold grain number distribution.
source code
 
area_grains_tgnd_range(target_line, col, row, width, height, min, max, below, nstats)
Calculates threshold grain number distribution in given height range.
source code
 
grains_splash_water(minima, locate_steps, locate_dropsize) source code
 
grain_distance_transform()
Performs Euclidean distance transform of a data field with grains.
source code
 
grain_simple_dist_trans(dtype, from_border)
Performs a distance transform of a data field with grains.
source code
 
grains_shrink(amount, dtype, from_border)
Erodes a data field containing mask by specified amount using a distance measure.
source code
 
grains_grow(amount, dtype, prevent_merging)
Dilates a data field containing mask by specified amount using a distance measure.
source code
 
grains_thin()
Performs thinning of a data field containing mask.
source code
 
fill_voids(nonsimple)
Fills voids in grains in a data field representing a mask.
source code
 
mark_extrema(extrema, maxima)
Marks local maxima or minima in a two-dimensional field.
source code
 
hough_line(x_gradient, y_gradient, result, hwidth, overlapping) source code
 
hough_circle(x_gradient, y_gradient, result, radius) source code
 
hough_line_strenghten(x_gradient, y_gradient, hwidth, threshold) source code
 
hough_circle_strenghten(x_gradient, y_gradient, radius, threshold) source code
 
hough_polar_line_to_datafield(rho, theta) source code
 
zoom_fft(isrc, rdest, idest, mx, my, fx0, fy0, fx1, fy1)
Computes Zoom FFT of a data field.
source code
 
fft1d(iin, rout, iout, orientation, windowing, direction, interpolation, preserverms, level)
Transforms all rows or columns in a data field with Fast Fourier Transform.
source code
 
area_1dfft(iin, rout, iout, col, row, width, height, orientation, windowing, direction, interpolation, preserverms, level)
Transforms all rows or columns in a rectangular part of a data field with Fast Fourier Transform.
source code
 
fft1d_raw(iin, rout, iout, orientation, direction)
Transforms all rows or columns in a data field with Fast Fourier Transform.
source code
 
fft2d(iin, rout, iout, windowing, direction, interpolation, preserverms, level)
Calculates 2D Fast Fourier Transform of a data field.
source code
 
area_2dfft(iin, rout, iout, col, row, width, height, windowing, direction, interpolation, preserverms, level)
Calculates 2D Fast Fourier Transform of a rectangular area of a data field.
source code
 
fft2d_raw(iin, rout, iout, direction)
Calculates 2D Fast Fourier Transform of a data field.
source code
 
fft2d_humanize()
Rearranges 2D FFT output to a human-friendly form.
source code
 
fft2d_dehumanize()
Rearranges 2D FFT output back from the human-friendly form.
source code
 
fft_postprocess(humanize)
Updates units, dimensions and offsets for a 2D FFT-processed field.
source code
 
fft_filter_1d(result_field, weights, orientation, interpolation)
Performs 1D FFT filtering of a data field.
source code
 
fft_window(windowing)
Performs two-dimensional windowing of a data field in preparation for 2D FFT.
source code
 
fft_window_1d(orientation, windowing)
Performs row-wise or column-wise windowing of a data field in preparation for 1D FFT.
source code
 
cwt(interpolation, scale, wtype)
Computes a continuous wavelet transform (CWT) at given scale and using given wavelet.
source code
 
area_fit_plane(mask, col, row, width, height)
Fits a plane through a rectangular part of a data field.
source code
 
fit_plane()
Fits a plane through a data field.
source code
 
fit_facet_plane(mfield, masking)
Calculates the inclination of a plane close to the dominant plane in a data field.
source code
 
plane_level(a, bx, by)
Subtracts plane from a data field.
source code
 
plane_rotate(xangle, yangle, interpolation)
Performs rotation of the plane along the X and Y axes.
source code
 
fit_lines(col, row, width, height, degree, exclude, orientation)
Independently levels profiles on each row/column in a data field.
source code
 
area_local_plane_quantity(size, col, row, width, height, type, result)
Convenience function to get just one quantity from DataField.area_fit_local_planes().
source code
 
local_plane_quantity(size, type, result)
Convenience function to get just one quantity from DataField.fit_local_planes().
source code
 
mfm_perpendicular_stray_field(out, height, thickness, sigma, walls, wall_delta)
Calculates stray field for perpendicular media, based on a mask showing the magnetisation orientation.
source code
 
mfm_perpendicular_stray_field_angle_correction(angle, orientation)
Performs correction of magnetic data for cantilever tilt.
source code
 
mfm_perpendicular_medium_force(fz, type, mtip, bx, by, length)
Calculates force as evaluated from z-component of the magnetic field for a given probe type.
source code
 
mfm_shift_z(out, zdiff)
Shifts magnetic field to a different lift height above the surface.
source code
 
mfm_find_shift_z(shifted, zdiffmin, zdiffmax)
Estimates the height difference between two magnetic field images.
source code
 
mfm_parallel_medium(height, size_a, size_b, size_c, magnetisation, thickness, component)
Calculates magnetic field or its derivatives above a simple medium consisting of stripes of left and right direction magnetisation.
source code
 
mfm_current_line(height, width, position, current, component)
Calculates magnetic field or its derivatives above a flat current line (stripe).
source code
 
get_max()
Finds the maximum value of a data field.
source code
 
get_min()
Finds the minimum value of a data field.
source code
 
get_min_max()
Finds minimum and maximum values of a data field.
source code
 
get_avg()
Computes average value of a data field.
source code
 
get_rms()
Computes root mean square value of a data field.
source code
 
get_mean_square()
Computes mean square value of a data field.
source code
 
get_sum()
Sums all values in a data field.
source code
 
get_median()
Computes median value of a data field.
source code
 
get_surface_area()
Computes surface area of a data field.
source code
 
get_surface_slope()
Computes root mean square surface slope (Sdq) of a data field.
source code
 
get_variation()
Computes the total variation of a data field.
source code
 
get_entropy()
Computes the entropy of a data field.
source code
 
get_entropy_2d(yfield)
Computes the entropy of a two-dimensional point cloud.
source code
 
area_get_max(mask, col, row, width, height)
Finds the maximum value in a rectangular part of a data field.
source code
 
area_get_min(mask, col, row, width, height)
Finds the minimum value in a rectangular part of a data field.
source code
 
area_get_min_max(mask, col, row, width, height)
Finds minimum and maximum values in a rectangular part of a data field.
source code
 
area_get_min_max_mask(mask, mode, col, row, width, height)
Finds minimum and maximum values in a rectangular part of a data field.
source code
 
area_get_avg(mask, col, row, width, height)
Computes average value of a rectangular part of a data field.
source code
 
area_get_avg_mask(mask, mode, col, row, width, height)
Computes average value of a rectangular part of a data field.
source code
 
area_get_rms(mask, col, row, width, height)
Computes root mean square value of a rectangular part of a data field.
source code
 
area_get_rms_mask(mask, mode, col, row, width, height)
Computes root mean square value of deviations of a rectangular part of a data field.
source code
 
area_get_grainwise_rms(mask, mode, col, row, width, height)
Computes grain-wise root mean square value of deviations of a rectangular part of a data field.
source code
 
area_get_sum(mask, col, row, width, height)
Sums values of a rectangular part of a data field.
source code
 
area_get_sum_mask(mask, mode, col, row, width, height)
Sums values of a rectangular part of a data field.
source code
 
area_get_median(mask, col, row, width, height)
Computes median value of a data field area.
source code
 
area_get_median_mask(mask, mode, col, row, width, height)
Computes median value of a data field area.
source code
 
area_get_surface_area(mask, col, row, width, height)
Computes surface area of a rectangular part of a data field.
source code
 
area_get_surface_area_mask(mask, mode, col, row, width, height)
Computes surface area of a rectangular part of a data field.
source code
 
area_get_surface_slope_mask(mask, mode, col, row, width, height)
Computes root mean square surface slope (Sdq) of a rectangular part of a data field.
source code
 
area_get_mean_square(mask, mode, col, row, width, height)
Computes mean square value of a rectangular part of a data field.
source code
 
area_get_entropy_at_scales(target_line, mask, mode, col, row, width, height, maxdiv)
Calculates estimates of value distribution entropy at various scales.
source code
 
get_entropy_2d_at_scales(yfield, target_line, maxdiv)
Calculates estimates of entropy of two-dimensional point cloud at various scales.
source code
 
area_get_variation(mask, mode, col, row, width, height)
Computes the total variation of a rectangular part of a data field.
source code
 
area_get_entropy(mask, mode, col, row, width, height)
Estimates the entropy of field data distribution.
source code
 
area_get_volume(basis, mask, col, row, width, height)
Computes volume of a rectangular part of a data field.
source code
 
get_autorange()
Computes data field value range with outliers cut-off.
source code
 
get_stats()
Computes basic statistical quantities of a data field.
source code
 
area_get_stats(mask, col, row, width, height)
Computes basic statistical quantities of a rectangular part of a data field.
source code
 
area_get_stats_mask(mask, mode, col, row, width, height)
Computes basic statistical quantities of a rectangular part of a data field.
source code
 
area_count_in_range(mask, col, row, width, height, below, above)
Counts data samples in given range.
source code
 
area_dh(mask, target_line, col, row, width, height, nstats)
Calculates distribution of heights in a rectangular part of data field.
source code
 
dh(target_line, nstats)
Calculates distribution of heights in a data field.
source code
 
area_cdh(mask, target_line, col, row, width, height, nstats)
Calculates cumulative distribution of heights in a rectangular part of data field.
source code
 
cdh(target_line, nstats)
Calculates cumulative distribution of heights in a data field.
source code
 
area_da(target_line, col, row, width, height, orientation, nstats)
Calculates distribution of slopes in a rectangular part of data field.
source code
 
area_da_mask(mask, target_line, col, row, width, height, orientation, nstats)
Calculates distribution of slopes in a rectangular part of data field, with masking.
source code
 
da(target_line, orientation, nstats)
Calculates distribution of slopes in a data field.
source code
 
area_cda(target_line, col, row, width, height, orientation, nstats)
Calculates cumulative distribution of slopes in a rectangular part of data field.
source code
 
area_cda_mask(mask, target_line, col, row, width, height, orientation, nstats)
Calculates cumulative distribution of slopes in a rectangular part of data field, with masking.
source code
 
cda(target_line, orientation, nstats)
Calculates cumulative distribution of slopes in a data field.
source code
 
area_acf(target_line, col, row, width, height, orientation, interpolation, nstats)
Calculates one-dimensional autocorrelation function of a rectangular part of a data field.
source code
 
acf(target_line, orientation, interpolation, nstats)
Calculates one-dimensional autocorrelation function of a data field.
source code
 
area_row_acf(mask, masking, col, row, width, height, level, weights)
Calculates the row-wise autocorrelation function (ACF) of a field.
source code
 
area_hhcf(target_line, col, row, width, height, orientation, interpolation, nstats)
Calculates one-dimensional height-height correlation function of a rectangular part of a data field.
source code
 
hhcf(target_line, orientation, interpolation, nstats)
Calculates one-dimensional height-height correlation function of a data field.
source code
 
area_row_hhcf(mask, masking, col, row, width, height, level, weights)
Calculates the row-wise height-height correlation function (HHCF) of a rectangular part of a field.
source code
 
area_psdf(target_line, col, row, width, height, orientation, interpolation, windowing, nstats)
Calculates one-dimensional power spectrum density function of a rectangular part of a data field.
source code
 
psdf(target_line, orientation, interpolation, windowing, nstats)
Calculates one-dimensional power spectrum density function of a data field.
source code
 
area_row_psdf(mask, masking, col, row, width, height, windowing, level)
Calculates the row-wise power spectrum density function (PSDF) of a rectangular part of a field.
source code
 
area_rpsdf(target_line, col, row, width, height, interpolation, windowing, nstats)
Calculates radial power spectrum density function of a rectangular part of a data field.
source code
 
rpsdf(target_line, interpolation, windowing, nstats)
Calculates radial power spectrum density function of a data field.
source code
 
area_row_asg(mask, masking, col, row, width, height, level)
Calculates the row-wise area scale graph (ASG) of a rectangular part of a field.
source code
 
area_2dacf(target_field, col, row, width, height, xrange, yrange)
Calculates two-dimensional autocorrelation function of a data field area.
source code
 
area_2dacf_mask(target_field, mask, masking, col, row, width, height, xrange, yrange, weights)
Calculates two-dimensional autocorrelation function of a data field area.
source code
 
acf2d(target_field)
Calculates two-dimensional autocorrelation function of a data field.
source code
 
area_2dpsdf_mask(target_field, mask, masking, col, row, width, height, windowing, level)
Calculates two-dimensional power spectrum density function of a data field area.
source code
 
psdf2d(target_field, windowing, level)
Calculates two-dimensional power spectrum density function of a data field.
source code
 
area_racf(target_line, col, row, width, height, nstats)
Calculates radially averaged autocorrelation function of a rectangular part of a data field.
source code
 
racf(target_line, nstats)
Calculates radially averaged autocorrelation function of a data field.
source code
 
area_minkowski_volume(target_line, col, row, width, height, nstats)
Calculates Minkowski volume functional of a rectangular part of a data field.
source code
 
minkowski_volume(target_line, nstats)
Calculates Minkowski volume functional of a data field.
source code
 
area_minkowski_boundary(target_line, col, row, width, height, nstats)
Calculates Minkowski boundary functional of a rectangular part of a data field.
source code
 
minkowski_boundary(target_line, nstats)
Calculates Minkowski boundary functional of a data field.
source code
 
area_minkowski_euler(target_line, col, row, width, height, nstats)
Calculates Minkowski connectivity functional (Euler characteristics) of a rectangular part of a data field.
source code
 
minkowski_euler(target_line, nstats)
Calculates Minkowski connectivity functional (Euler characteristics) of a data field.
source code
 
area_get_dispersion(mask, masking, col, row, width, height)
Calculates the dispersion of a data field area, taking it as a distribution.
source code
 
get_dispersion()
Calculates the dispersion of a data field, taking it as a distribution.
source code
 
slope_distribution(derdist, kernel_size)
Computes angular slope distribution.
source code
 
get_normal_coeffs(normalize1)
Computes average normal vector of a data field.
source code
 
area_get_normal_coeffs(col, row, width, height, normalize1)
Computes average normal vector of an area of a data field.
source code
 
area_get_inclination(col, row, width, height)
Calculates the inclination of the image (polar and azimuth angle).
source code
 
get_inclination()
Calculates the inclination of the image (polar and azimuth angle).
source code
 
area_get_line_stats(mask, target_line, col, row, width, height, quantity, orientation)
Calculates a line quantity for each row or column in a data field area.
source code
 
get_line_stats_mask(mask, masking, target_line, weights, col, row, width, height, quantity, orientation)
Calculates a line quantity for each row or column in a data field area.
source code
 
get_line_stats(target_line, quantity, orientation)
Calculates a line quantity for each row or column of a data field.
source code
 
count_maxima()
Counts the number of regional maxima in a data field.
source code
 
count_minima()
Counts the number of regional minima in a data field.
source code
 
psdf_to_angular_spectrum(nstats)
Transforms 2D power spectral density to an angular spectrum.
source code
 
angular_average(target_line, mask, masking, x, y, r, nstats)
Performs angular averaging of a part of a data field.
source code
 
copy_units_to_surface(surface)
Sets lateral and value units of a surface to match a data field.
source code
 
get_data()
Extract the data of a data field.
source code
 
set_data(data)
Sets the entire contents of a data field.
source code
 
fit_polynom(col_degree, row_degree)
Fits a two-dimensional polynomial to a data field.
source code
 
area_fit_polynom(col, row, width, height, col_degree, row_degree)
Fits a two-dimensional polynomial to a rectangular part of a data field.
source code
 
subtract_polynom(col_degree, row_degree, coeffs)
Subtracts a two-dimensional polynomial from a data field.
source code
 
area_subtract_polynom(col, row, width, height, col_degree, row_degree, coeffs)
Subtracts a two-dimensional polynomial from a rectangular part of a data field.
source code
 
fit_legendre(col_degree, row_degree)
Fits two-dimensional Legendre polynomial to a data field.
source code
 
area_fit_legendre(col, row, width, height, col_degree, row_degree)
Fits two-dimensional Legendre polynomial to a rectangular part of a data field.
source code
 
subtract_legendre(col_degree, row_degree, coeffs)
Subtracts a two-dimensional Legendre polynomial fit from a data field.
source code
 
area_subtract_legendre(col, row, width, height, col_degree, row_degree, coeffs)
Subtracts a two-dimensional Legendre polynomial fit from a rectangular part of a data field.
source code
 
fit_poly_max(max_degree)
Fits two-dimensional polynomial with limited total degree to a data field.
source code
 
area_fit_poly_max(col, row, width, height, max_degree)
Fits two-dimensional polynomial with limited total degree to a rectangular part of a data field.
source code
 
subtract_poly_max(max_degree, coeffs)
Subtracts a two-dimensional polynomial with limited total degree from a data field.
source code
 
area_subtract_poly_max(col, row, width, height, max_degree, coeffs)
Subtracts a two-dimensional polynomial with limited total degree from a rectangular part of a data field.
source code
 
fit_poly(mask_field, term_powers, exclude)
Fit a given set of polynomial terms to a data field.
source code
 
area_fit_poly(mask_field, col, row, width, height, term_powers, exclude)
Fit a given set of polynomial terms to a rectangular part of a data field.
source code
 
subtract_poly(term_powers, coeffs)
Subtract a given set of polynomial terms from a data field.
source code
 
area_subtract_poly(col, row, width, height, term_powers, coeffs)
Subtract a given set of polynomial terms from a rectangular part of a data field.
source code
 
area_fit_local_planes(size, col, row, width, height, types)
Fits a plane through neighbourhood of each sample in a rectangular part of a data field.
source code
 
fit_local_planes(size, types)
Fits a plane through neighbourhood of each sample in a data field.
source code
 
elliptic_area_extract(col, row, width, height)
Extracts values from an elliptic region of a data field.
source code
 
elliptic_area_unextract(col, row, width, height, data)
Puts values back to an elliptic region of a data field.
source code
 
circular_area_extract(col, row, radius)
Extracts values from a circular region of a data field.
source code
 
circular_area_unextract(col, row, radius, data)
Puts values back to a circular region of a data field.
source code
 
circular_area_extract_with_pos(col, row, radius)
Extracts values with positions from a circular region of a data field.
source code
 
local_maximum(x, y, ax, ay)
Searches an elliptical area in a data field for local maximum.
source code
 
affine(dest, affine, interp, exterior, fill_value)
Performs an affine transformation of a data field in the horizontal plane.
source code
 
affine_prepare(dest, a1a2, a1a2_corr, scaling, prevent_rotation, oversampling)
Resolves an affine transformation of a data field in the horizontal plane.
source code
 
waterpour(result)
Performs the classical Vincent watershed segmentation of a data field.
source code
 
measure_lattice_acf(a1a2)
Estimates or improves estimate of lattice vectors from a 2D ACF field.
source code
 
measure_lattice_psdf(a1a2)
Estimates or improves estimate of lattice vectors from a 2D PSDF field.
source code
 
get_local_maxima_list(ndata, skip, threshold, subpixel)
Locates local maxima in a data field.
source code
 
get_profile_mask(mask, masking, xfrom, yfrom, xto, yto, res, thickness, interpolation)
Extracts a possibly averaged profile from data field, with masking.
source code
 
number_grains()
Constructs an array with grain numbers from a mask data field.
source code
 
number_grains_periodic()
Constructs an array with grain numbers from a mask data field treated as periodic.
source code
 
get_grain_sizes(grains)
Finds sizes of all grains in a mask data field.
source code
 
get_grain_bounding_boxes(grains)
Finds bounding boxes of all grains in a mask data field.
source code
 
get_grain_bounding_boxes_periodic(grains)
Finds bounding boxes of all grains in a mask data field, assuming periodic boundary condition.
source code
 
get_grain_inscribed_boxes(grains)
Finds maximum-area inscribed boxes of all grains in a mask data field.
source code
 
grains_get_values(grains, quantity)
Finds a specified quantity for all grains in a data field.
source code
 
grains_get_distribution(grain_field, grains, quantity, nstats)
Calculates the distribution of a specified grain quantity.
source code
 
create_full_mask() source code
 
duplicate()
Convenience macro doing gwy_serializable_duplicate() with all the necessary typecasting.
source code
 
get_xmeasure()
Alias for DataField.get_dx().
source code
 
get_ymeasure()
Alias for DataField.get_dy().
source code
 
get_data_pointer()
Gets pointer to data which the data field contains.
source code
Method Details

__init__(xres, yres, xreal, yreal, nullme)
(Constructor)

source code 

Creates a new data field.

Parameters:
  • xres - X-resolution, i.e., the number of columns. (int)
  • yres - Y-resolution, i.e., the number of rows. (int)
  • xreal - Real horizontal physical dimension. (float)
  • yreal - Real vertical physical dimension. (float)
  • nullme - Whether the data field should be initialized to zeroes. If False, the data will not be initialized. (bool)
Returns:
A newly created data field. (DataField)
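
A minimal usage sketch, assuming the pygwy bindings are imported as the gwy module (the 256-pixel and 1e-6 real dimensions are illustrative only):

    import gwy

    # Create a 256x256 field spanning 1e-6 x 1e-6 in real units,
    # initialized to zeroes because nullme is True.
    field = gwy.DataField(256, 256, 1e-6, 1e-6, True)
    assert field.get_xres() == 256 and field.get_yres() == 256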

sum_fields(operand1, operand2)

source code 

Sums two data fields.

Parameters:
  • operand1 - First data field operand. (DataField)
  • operand2 - Second data field operand. (DataField)

subtract_fields(operand1, operand2)

source code 

Subtracts one data field from another.

Parameters:
  • operand1 - First data field operand. (DataField)
  • operand2 - Second data field operand. (DataField)

divide_fields(operand1, operand2)

source code 

Divides one data field with another.

Parameters:
  • operand1 - First data field operand. (DataField)
  • operand2 - Second data field operand. (DataField)

multiply_fields(operand1, operand2)

source code 

Multiplies two data fields.

Parameters:
  • operand1 - First data field operand. (DataField)
  • operand2 - Second data field operand. (DataField)

min_of_fields(operand1, operand2)

source code 

Finds point-wise minima of two data fields.

Parameters:
  • operand1 - First data field operand. (DataField)
  • operand2 - Second data field operand. (DataField)

max_of_fields(operand1, operand2)

source code 

Finds point-wise maxima of two data fields.

Parameters:
  • operand1 - First data field operand. (DataField)
  • operand2 - Second data field operand. (DataField)

hypot_of_fields(operand1, operand2)

source code 

Finds point-wise hypotenuse of two data fields.

Parameters:
  • operand1 - First data field operand. (DataField)
  • operand2 - Second data field operand. (DataField)

Since: 2.31

linear_combination(coeff1, operand1, coeff2, operand2, constant)

source code 

Computes point-wise general linear combination of two data fields.

Parameters:
  • coeff1 - Factor to multiply the first operand with. (float)
  • operand1 - First data field operand. (DataField)
  • coeff2 - Factor to multiply the second operand with. (float)
  • operand2 - Second data field operand. (DataField)
  • constant - Constant term to add to the result. (float)

Since: 2.59
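
A hedged sketch of the calling convention shared by the arithmetic helpers above: the field the method is invoked on is assumed to act as the destination, with operand1 and operand2 supplying the inputs (a and b stand for existing, equally sized data fields):

    result = a.new_alike(False)                      # destination with matching geometry
    result.sum_fields(a, b)                          # result = a + b
    result.linear_combination(2.0, a, -1.0, b, 0.5)  # result = 2*a - b + 0.5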

check_compatibility(data_field2, check)

source code 

Checks whether two data fields are compatible.

Parameters:
  • data_field2 - Another data field. (DataField)
  • check - The compatibility tests to perform. Expected values: DATA_COMPATIBILITY_RES, DATA_COMPATIBILITY_REAL, DATA_COMPATIBILITY_MEASURE, DATA_COMPATIBILITY_LATERAL, DATA_COMPATIBILITY_VALUE, DATA_COMPATIBILITY_AXISCAL, DATA_COMPATIBILITY_NCURVES, DATA_COMPATIBILITY_CURVELEN, DATA_COMPATIBILITY_ALL. (DataCompatibilityFlags)
Returns:
Zero if all tested properties are compatible. Flags corresponding to failed tests if data fields are not compatible. (DataCompatibilityFlags)
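
For example, a sketch that tests resolution and lateral-unit compatibility before combining two fields (field and other stand for existing DataField objects; the flag values are assumed to be exposed as gwy module constants, as pygwy normally does):

    flags = gwy.DATA_COMPATIBILITY_RES | gwy.DATA_COMPATIBILITY_LATERAL
    if field.check_compatibility(other, flags) == 0:
        field.sum_fields(field, other)   # same resolution and lateral units, safe to add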

check_compatibility_with_brick_xy(brick, check)

source code 

Checks whether a data field is compatible with brick XY-planes.

Parameters:
  • brick - A three-dimensional data brick. (Brick)
  • check - The compatibility tests to perform. Expected values: DATA_COMPATIBILITY_RES, DATA_COMPATIBILITY_REAL, DATA_COMPATIBILITY_MEASURE, DATA_COMPATIBILITY_LATERAL, DATA_COMPATIBILITY_VALUE, DATA_COMPATIBILITY_AXISCAL, DATA_COMPATIBILITY_NCURVES, DATA_COMPATIBILITY_CURVELEN, DATA_COMPATIBILITY_ALL. (DataCompatibilityFlags)
Returns:
Zero if all tested properties are compatible. Flags corresponding to failed tests if the data objects are not compatible. (DataCompatibilityFlags)

Since: 2.51

extend(left, right, up, down, exterior, fill_value, keep_offsets)

source code 

Creates a new data field by extending another data field using the specified method of exterior handling.

Parameters:
  • left - Number of pixels to extend to the left (towards lower column indices). (int)
  • right - Number of pixels to extend to the right (towards higher column indices). (int)
  • up - Number of pixels to extend up (towards lower row indices). (int)
  • down - Number of pixels to extend down (towards higher row indices). (int)
  • exterior - Exterior pixels handling. Expected values: EXTERIOR_UNDEFINED, EXTERIOR_BORDER_EXTEND, EXTERIOR_MIRROR_EXTEND, EXTERIOR_PERIODIC, EXTERIOR_FIXED_VALUE, EXTERIOR_LAPLACE. (ExteriorType)
  • fill_value - The value to use with EXTERIOR_FIXED_VALUE exterior. (float)
  • keep_offsets - True to set the X and Y offsets of the new field using field offsets. False to set offsets of the new field to zeroes. (bool)
Returns:
A newly created data field. (DataField)

Since: 2.36
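
A short sketch padding an existing field by 16 pixels on every side with mirrored data (the pixel count is illustrative; fill_value is ignored unless EXTERIOR_FIXED_VALUE is used):

    padded = field.extend(16, 16, 16, 16, gwy.EXTERIOR_MIRROR_EXTEND, 0.0, False)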

laplace_solve(mask, grain_id, qprec)

source code 

Replaces masked areas by the solution of Laplace equation.

The boundary conditions on mask boundaries are Dirichlet with values given by pixels on the outer boundary of the masked area. Boundary conditions at field edges are Neumann conditions ∂z/∂n = 0, where n denotes the normal to the edge. If the entire area of the field is to be replaced the problem is underspecified; the field will be filled with zeroes.

For the default value of qprec the result should be good enough for any image processing purposes, with a typical local error of the order of 10⁻⁵ for very large grains and possibly much smaller for small grains. You can lower qprec down to about 0.3 or even 0.2 if speed is crucial and some precision can be sacrificed; below that the result just starts becoming somewhat worse for not much speed increase. Conversely, you may wish to increase qprec up to 3 or even 5 if accuracy is important and you can afford the increased computation time.

Parameters:
  • mask - A two-dimensional data field containing mask defining the areas to interpolate. (DataField)
  • grain_id - The id number of the grain to replace with the solution of Laplace equation, from 1 to ngrains (see DataField.number_grains()). Passing 0 means to replace the entire empty space outside grains while passing a negative value means to replace the entire masked area. (int)
  • qprec - Speed-accuracy tuning parameter. Pass 1.0 for the default that is fast and sufficiently precise. (float)

Since: 2.47
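
A minimal sketch replacing every masked pixel at once (mask is assumed to be an existing mask field of the same dimensions as the data):

    # A negative grain_id replaces the entire masked area; qprec 1.0 is the default trade-off.
    field.laplace_solve(mask, -1, 1.0)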

correct_laplace_iteration(mask_field, buffer_field, corrfactor)

source code 

Performs one iteration of Laplace data correction.

Tries to remove all the points marked in the mask from the data using an iterative method similar to solving the heat flux equation.

Use this function repeatedly until reasonable error is reached.

Parameters:
  • mask_field - Mask of places to be corrected. (DataField)
  • buffer_field - Initialized to same size as mask and data. (DataField)
  • corrfactor - Correction factor within step. (float)
Returns:
Value error. (float)

Warning: For almost all purposes this function was superseded by non-iterative DataField.laplace_solve() which is simultaneously much faster and more accurate.
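
A sketch of the iterative calling pattern described above, shown only to illustrate the loop; the correction factor and stopping tolerance are illustrative and DataField.laplace_solve() should normally be preferred (mask_field stands for an existing mask DataField):

    buffer_field = field.new_alike(True)   # same size as the data and mask_field
    error = 1.0
    while error > 1e-6:
        error = field.correct_laplace_iteration(mask_field, buffer_field, 0.2)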

correct_average(mask_field)

source code 

Fills data under mask with the average value.

This function simply puts average value of all the data_field values (both masked and unmasked) into points in data_field lying under points where mask_field values are nonzero.

In most cases you probably want to use DataField.correct_average_unmasked() instead.

Parameters:
  • mask_field - Mask of places to be corrected. (DataField)

correct_average_unmasked(mask_field)

source code 

Fills data under mask with the average value of unmasked data.

This function calculates the average value of all unmasked pixels in data_field and then fills all the masked pixels with this average value. It is useful as the first rough step of correction of data under the mask.

If all data are masked the field is filled with zeroes.

Parameters:
  • mask_field - Mask of places to be corrected. (DataField)

Since: 2.44

mask_outliers(mask_field, thresh)

source code 

Creates mask of data that are above or below thresh*sigma from average height.

Sigma denotes the root-mean-square deviation of heights. This criterion corresponds to the usual Gaussian-distribution outlier detection if thresh is 3.

Parameters:
  • mask_field - A data field to be filled with mask. (DataField)
  • thresh - Threshold value. (float)

mask_outliers2(mask_field, thresh_low, thresh_high)

source code 

Creates mask of data that are above or below multiples of rms from average height.

Data that are below mean-thresh_low*sigma or above mean+thresh_high*sigma are marked as outliers, where sigma denotes the root-mean square deviation of heights.

Parameters:
  • mask_field - A data field to be filled with mask. (DataField)
  • thresh_low - Lower threshold value. (float)
  • thresh_high - Upper threshold value. (float)

Since: 2.26
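
A sketch marking asymmetric outliers (the thresholds are illustrative):

    mask = field.new_alike(False)
    field.mask_outliers2(mask, 3.0, 2.5)   # mark data below mean - 3*rms or above mean + 2.5*rms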

distort(dest, invtrans, user_data, interp, exterior, fill_value)

source code 

Distorts a data field in the horizontal plane.

Note the transform function invtrans is the inverse transform, in other words it calculates the old coordinates from the new coordinates (the transform would not be uniquely defined the other way round).

The EXTERIOR_LAPLACE exterior type cannot be used with this function.

Parameters:
  • dest - Destination data field. (DataField)
  • invtrans - Inverse transform function, that is the transformation from new coordinates to old coordinates. It gets (j+0.5, i+0.5), where i and j are the new row and column indices, passed as the input coordinates. The output coordinates should follow the same convention. Unless a special exterior handling is required, the transform function does not need to concern itself with coordinates being outside of the data. (CoordTransform2DFunc)
  • user_data - Pointer passed as user_data to invtrans. (gpointer)
  • interp - Interpolation type to use. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • exterior - Exterior pixels handling. Expected values: EXTERIOR_UNDEFINED, EXTERIOR_BORDER_EXTEND, EXTERIOR_MIRROR_EXTEND, EXTERIOR_PERIODIC, EXTERIOR_FIXED_VALUE, EXTERIOR_LAPLACE. (ExteriorType)
  • fill_value - The value to use with EXTERIOR_FIXED_VALUE. (float)

Since: 2.5

sample_distorted(dest, coords, interp, exterior, fill_value)

source code 

Resamples a data field in an arbitrarily distorted manner.

Each item in coords corresponds to one pixel in dest and gives the coordinates in source defining the value to set in this pixel.

The EXTERIOR_LAPLACE exterior type cannot be used with this function.

Parameters:
  • dest - Destination data field. (DataField)
  • coords - Array of source coordinates with the same number of items as dest, ordered as data field data. See DataField.distort() for coordinate convention discussion. (const-XY*)
  • interp - Interpolation type to use. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • exterior - Exterior pixels handling. Expected values: EXTERIOR_UNDEFINED, EXTERIOR_BORDER_EXTEND, EXTERIOR_MIRROR_EXTEND, EXTERIOR_PERIODIC, EXTERIOR_FIXED_VALUE, EXTERIOR_LAPLACE. (ExteriorType)
  • fill_value - The value to use with EXTERIOR_FIXED_VALUE. (float)

Since: 2.45

mark_scars(result, threshold_high, threshold_low, min_scar_len, max_scar_width, negative)

source code 

Find and marks scars in a data field.

Scars are linear horizontal defects consisting of shifted values. Zero or negative values in result signify normal data, positive values signify samples that are part of a scar.

Parameters:
  • result - A data field to store the result to (it is resized to match data_field). (DataField)
  • threshold_high - Minimum relative step for scar marking, must be positive. (float)
  • threshold_low - Definite relative step for scar marking, must be at least equal to threshold_high. (float)
  • min_scar_len - Minimum length of a scar, shorter ones are discarded (must be at least one). (float)
  • max_scar_width - Maximum width of a scar, must be at least one. (float)
  • negative - True to detect negative scars, False to detect positive ones. (bool)

Since: 2.46

subtract_row_shifts(shifts)

source code 

Shifts entire data field rows as specified by given data line.

Data line shifts must have resolution corresponding to the number of data_field rows. Its values are subtracted from individual field rows.

Parameters:
  • shifts - Data line containing the row shifts. (DataLine)

Since: 2.52

find_row_shifts_trimmed_mean(mask, masking, trimfrac, mincount)

source code 

Finds row shifts to misaligned row correction using trimmed row means.

For zero trimfrac the function calculates row means. For trimfrac of 1/2 or larger it calculates row medians. Values in between correspond to trimmed means.

Parameters:
  • mask - Mask of values to take values into account/exclude, or None for full data_field. (DataField)
  • masking - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • trimfrac - Fraction of lowest values and highest values to discard when trimming. (float)
  • mincount - Minimum number of values in a row necessary for per-row calculation. Rows which are essentially completely masked are not shifted with respect to a global value. Pass a non-positive number to use an automatic minimum count. (int)
Returns:
A newly created data line containing the row shifts, for instance row means, medians or trimmed means. (DataLine)

Since: 2.52
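
Together with subtract_row_shifts() above, this gives a simple row-alignment step. A hedged pygwy sketch, assuming data_field holds the image to correct (the trim fraction is illustrative):

    import gwy

    # Trimmed row means: no mask, discard 25 % of values at each end of
    # every row, automatic minimum count.
    shifts = data_field.find_row_shifts_trimmed_mean(None, gwy.MASK_IGNORE,
                                                     0.25, 0)

    # Subtract the per-row shifts to align the rows.
    data_field.subtract_row_shifts(shifts)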

find_row_shifts_trimmed_diff(mask, masking, trimfrac, mincount)

source code 

Finds row shifts to misaligned row correction using trimmed means of row differences.

For zero trimfrac the function calculates row means. For trimfrac of 1/2 or larger it calculates row medians. Values between correspond to trimmed means.

Parameters:
  • mask - Mask of values to take values into account/exclude, or None for full data_field. (DataField)
  • masking - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • trimfrac - Fraction of lowest values and highest values to discard when trimming. (float)
  • mincount - Minimum number of values in a row necessary for per-row calculation. Rows which are essentially completely masked are not shifted with respect to a global value. Pass a non-positive number to use an automatic minimum count. (int)
Returns:
A newly created data line containing the row shifts, for instance row means, medians or trimmed means. (DataLine)

Since: 2.52

get_correlation_score(kernel_field, col, row, kernel_col, kernel_row, kernel_width, kernel_height)

source code 

Calculates a correlation score in one point.

Correlation window size is given by kernel_col, kernel_row, kernel_width and kernel_height; the position of the correlation window on the data is given by col and row.

If anything fails (data too close to the boundary, etc.), the function returns -1.0 (no correlation).

Parameters:
  • kernel_field - Kernel to correlate data field with. (DataField)
  • col - Upper-left column position in the data field. (int)
  • row - Upper-left row position in the data field. (int)
  • kernel_col - Upper-left column position in kernel field. (int)
  • kernel_row - Upper-left row position in kernel field. (int)
  • kernel_width - Width of kernel field area. (int)
  • kernel_height - Height of kernel field area. (int)
Returns:
Correlation score (between -1.0 and 1.0). A value of 1.0 denotes maximum correlation, -1.0 no correlation. (float)

get_weighted_correlation_score(kernel_field, weight_field, col, row, kernel_col, kernel_row, kernel_width, kernel_height)

source code 

Calculates a correlation score in one point using weights to center the used information to the center of kernel.

Correlation window size is given by kernel_col, kernel_row, kernel_width and kernel_height; the position of the correlation window on the data is given by col and row.

If anything fails (data too close to the boundary, etc.), the function returns -1.0 (no correlation).

Parameters:
  • kernel_field - Kernel to correlate data field with. (DataField)
  • weight_field - Data field of the same size as the kernel window. (DataField)
  • col - Upper-left column position in the data field. (int)
  • row - Upper-left row position in the data field. (int)
  • kernel_col - Upper-left column position in kernel field. (int)
  • kernel_row - Upper-left row position in kernel field. (int)
  • kernel_width - Width of kernel field area. (int)
  • kernel_height - Height of kernel field area. (int)
Returns:
Correlation score (between -1.0 and 1.0). A value of 1.0 denotes maximum correlation, -1.0 no correlation. (float)

crosscorrelate(data_field2, x_dist, y_dist, score, search_width, search_height, window_width, window_height)

source code 

Algorithm for matching two different images of the same object under changes.

It does not use any special features for matching. It simply searches for all points (with their neighbourhoods) of data_field1 within data_field2. Parameters search_width and search_height determine the maximum area where to search for points. The area is centered in data_field2 at the former position of the points in data_field1.

Parameters:
  • data_field2 - A data field. (DataField)
  • x_dist - A data field to store x-distances to. (DataField)
  • y_dist - A data field to store y-distances to. (DataField)
  • score - Data field to store correlation scores to. (DataField)
  • search_width - Search area width. (int)
  • search_height - Search area height. (int)
  • window_width - Correlation window width. This parameter is not actually used. Pass zero. (int)
  • window_height - Correlation window height. This parameter is not actually used. Pass zero. (int)
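
A minimal sketch of matching two images of the same scene, assuming data_field1 and data_field2 are DataField objects with identical dimensions; the search area size is illustrative:

    import gwy

    # Output fields for the x/y displacement components and the score.
    x_dist = data_field1.new_alike(True)
    y_dist = data_field1.new_alike(True)
    score = data_field1.new_alike(True)

    # Search in 10x10 pixel areas; the window size arguments are unused.
    data_field1.crosscorrelate(data_field2, x_dist, y_dist, score,
                               10, 10, 0, 0)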

crosscorrelate_init(data_field2, x_dist, y_dist, score, search_width, search_height, window_width, window_height)

source code 

Initializes a cross-correlation iterator.

This iterator reports its state as ComputationStateType.

Parameters:
  • data_field2 - A data field. (DataField)
  • x_dist - A data field to store x-distances to, or None. (DataField)
  • y_dist - A data field to store y-distances to, or None. (DataField)
  • score - Data field to store correlation scores to, or None. (DataField)
  • search_width - Search area width. (int)
  • search_height - Search area height. (int)
  • window_width - Correlation window width. (int)
  • window_height - Correlation window height. (int)
Returns:
A new cross-correlation iterator. (ComputationState*)

correlate(kernel_field, score, method)

source code 

Computes correlation score for all positions in a data field.

The correlation score is computed for all points in the data field data_field and the full size of the correlation kernel kernel_field.

The points in score correspond to centers of kernel. More precisely, the point ((kxres-1)/2, (kyres-1)/2) in score corresponds to kernel field top left corner coincident with data field top left corner. Points outside the area where the kernel field fits into the data field completely are set to -1 for CORRELATION_NORMAL.

This function is mostly made obsolete by DataField.correlation_search() which offers, beside the plain FFT-based correlation, a method equivalent to CORRELATION_NORMAL as well as several others, all computed efficiently using FFT.

Parameters:
  • kernel_field - Correlation kernel. (DataField)
  • score - Data field to store correlation scores to. (DataField)
  • method - Correlation score calculation method. Expected values: CORRELATION_NORMAL, CORRELATION_FFT, CORRELATION_POC. (CorrelationType)

correlate_init(kernel_field, score)

source code 

Creates a new correlation iterator.

This iterator reports its state as ComputationStateType.

This function is mostly made obsolete by DataField.correlation_search() which offers, beside the plain FFT-based correlation, a method equivalent to CORRELATION_NORMAL as well as several others, all computed efficiently using FFT.

Parameters:
  • kernel_field - Kernel to correlate data field with. (DataField)
  • score - Data field to store correlation scores to. (DataField)
Returns:
A new correlation iterator. (ComputationState*)

correlation_search(kernel, kernel_weight, target, method, regcoeff, exterior, fill_value)

source code 

Performs correlation search of a detail in a larger data field.

There are two basic classes of methods: Covariance (products of kernel and data values are summed) and height difference (squared differences between kernel and data values are summed). For the second class, the sign of the output is inverted. So in both cases higher values mean better match. All methods are implemented efficiently using FFT.

Usually you want to use CORR_SEARCH_COVARIANCE or CORR_SEARCH_HEIGHT_DIFF, in which the absolute data offsets play no role (only the differences).

If the detail can also occur with different height scales, use CORR_SEARCH_COVARIANCE_SCORE or CORR_SEARCH_HEIGHT_DIFF_SCORE in which the local data variance is normalised. In this case dfield regions with very small (or zero) variance can lead to odd results and spurious maxima. Use regcoeff to suppress them: Score of image details is suppressed if their variance is regcoeff times the mean local variance.

If kernel_weight is not None, it allows masking/weighting of the kernel. The simplest use is masking when searching for a non-rectangular detail: fill kernel_weight with 1s for important kernel pixels and with 0s for irrelevant pixels. However, you can use arbitrary non-negative weights.

Parameters:
  • kernel - Detail to find (kernel). (DataField)
  • kernel_weight - Kernel weight, or None. If given, its dimensions must match kernel. (DataField)
  • target - Data field to fill with the score. It will be resampled to match dfield. (DataField)
  • method - Method, determining the type of output to put into target. Expected values: CORR_SEARCH_COVARIANCE_RAW, CORR_SEARCH_COVARIANCE, CORR_SEARCH_COVARIANCE_SCORE, CORR_SEARCH_HEIGHT_DIFF_RAW, CORR_SEARCH_HEIGHT_DIFF, CORR_SEARCH_HEIGHT_DIFF_SCORE. (CorrSearchType)
  • regcoeff - Regularisation coefficient, any positive number. Pass something like 0.1 if unsure. You can also pass zero, it means the same as glib.MINDOUBLE. (float)
  • exterior - Exterior pixels handling. Expected values: EXTERIOR_UNDEFINED, EXTERIOR_BORDER_EXTEND, EXTERIOR_MIRROR_EXTEND, EXTERIOR_PERIODIC, EXTERIOR_FIXED_VALUE, EXTERIOR_LAPLACE. (ExteriorType)
  • fill_value - The value to use with EXTERIOR_FIXED_VALUE exterior. (float)

Since: 2.50
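
A hedged pygwy sketch of a detail search, assuming data_field is large enough that the extracted 32x32 region exists; the coordinates and regularisation coefficient are illustrative:

    import gwy

    # Use a small detail cut out of the image itself as the kernel.
    kernel = data_field.area_extract(100, 100, 32, 32)

    # Score field; it is resampled to match data_field.
    score = data_field.new_alike(False)

    data_field.correlation_search(kernel, None, score,
                                  gwy.CORR_SEARCH_COVARIANCE_SCORE,
                                  0.1, gwy.EXTERIOR_BORDER_EXTEND, 0.0)

    # High values in score indicate likely positions of the detail.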

invalidate()

source code 

Invalidates cached data field stats.

User code should rarely need this method, as all DataField methods perform proper invalidation when they change data, as does DataField.get_data().

However, if you get the raw data with DataField.get_data() and then mix direct changes to it with calls to methods like DataField.get_max(), you may need to explicitly invalidate the cached values so that DataField.get_max() knows it has to recompute the maximum.

new_alike(nullme)

source code 

Creates a new data field similar to an existing one.

Use DataField.duplicate() if you want to copy a data field including data.

Parameters:
  • nullme - Whether the data field should be initialized to zeroes. If False, the data will not be initialized. (bool)
Returns:
A newly created data field. (DataField)

new_resampled(xres, yres, interpolation)

source code 

Creates a new data field by resampling an existing one.

This method is equivalent to DataField.duplicate() followed by DataField.resample(), but it is more efficient.

Parameters:
  • xres - Desired X resolution. (int)
  • yres - Desired Y resolution. (int)
  • interpolation - Interpolation method to use. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
Returns:
A newly created data field. (DataField)

resample(xres, yres, interpolation)

source code 

Resamples a data field using the given interpolation method.

This method may invalidate raw data buffer returned by DataField.get_data().

Parameters:
  • xres - Desired X resolution. (int)
  • yres - Desired Y resolution. (int)
  • interpolation - Interpolation method to use. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)

bin(target, binw, binh, xoff, yoff, trimlowest, trimhighest)

source code 

Bins a data field into another data field.

See DataField.new_binned() for a detailed description.

Parameters:
  • target - Target data field. It will be resized as necessary. (DataField)
  • binw - Bin width (in pixels). (int)
  • binh - Bin height (in pixels). (int)
  • xoff - Horizontal offset of bins (in pixels). (int)
  • yoff - Vertical offset of bins (in pixels). (int)
  • trimlowest - Number of lowest values to discard. (int)
  • trimhighest - Number of highest values to discard. (int)

Since: 2.55

new_binned(binw, binh, xoff, yoff, trimlowest, trimhighest)

source code 

Creates a new data field by binning an existing one.

The data field is divided into rectangles of dimensions binw×binh, offset by (xoff, yoff). The values in each complete rectangle are averaged and the average becomes the pixel value in the newly created, smaller data field.

Note that the result is the average – not the sum – of the individual values. Multiply the returned data field by binw×binh if you want sums.

By giving non-zero trimlowest and trimhighest you can change the plain average to a trimmed one (even turning it to median in the extreme case). It must always hold that trimlowest + trimhighest is smaller than binw×binh.

Parameters:
  • binw - Bin width (in pixels). (int)
  • binh - Bin height (in pixels). (int)
  • xoff - Horizontal offset of bins (in pixels). (int)
  • yoff - Vertical offset of bins (in pixels). (int)
  • trimlowest - Number of lowest values to discard. (int)
  • trimhighest - Number of highest values to discard. (int)
Returns:
A newly created data field. (DataField)

Since: 2.50
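
A short pygwy sketch, assuming data_field is an existing DataField; the bin size is illustrative:

    # Average the field in 4x4 pixel bins, no offset, plain (untrimmed) average.
    binned = data_field.new_binned(4, 4, 0, 0, 0, 0)

    # Multiply by the bin area if sums rather than averages are wanted.
    binned.multiply(4 * 4)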

resize(ulcol, ulrow, brcol, brrow)

source code 

Resizes (crops) a data field.

Crops a data field to a rectangle between upper-left and bottom-right points, recomputing real size.

This method may invalidate raw data buffer returned by DataField.get_data().

Parameters:
  • ulcol - Upper-left column coordinate. (int)
  • ulrow - Upper-left row coordinate. (int)
  • brcol - Bottom-right column coordinate + 1. (int)
  • brrow - Bottom-right row coordinate + 1. (int)

area_extract(col, row, width, height)

source code 

Extracts a rectangular part of a data field to a new data field.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The extracted area as a newly created data field. (DataField)

copy(dest, nondata_too)

source code 

Copies the contents of an already allocated data field to a data field of the same size.

Parameters:
  • dest - Destination data field. (DataField)
  • nondata_too - Whether non-data (units) should be copied too. (bool)

area_copy(dest, col, row, width, height, destcol, destrow)

source code 

Copies a rectangular area from one data field to another.

The area starts at (col, row) in src and its dimension is width*height. It is copied to dest starting from (destcol, destrow).

The source area has to be completely contained in src. No assumptions are made about the destination position; however, parts of the source area sticking out of the destination data field dest are cut off.

If src is equal to dest, the areas may not overlap.

Parameters:
  • dest - Destination data field. (DataField)
  • col - Area upper-left column coordinate in src. (int)
  • row - Area upper-left row coordinate in src. (int)
  • width - Area width (number of columns), pass -1 for full src width. (int)
  • height - Area height (number of rows), pass -1 for full src height. (int)
  • destcol - Destination column in dest. (int)
  • destrow - Destination row in dest. (int)

get_xres()

source code 

Gets X resolution (number of columns) of a data field.

Returns:
X resolution. (int)

get_yres()

source code 

Gets Y resolution (number of rows) of the field.

Returns:
Y resolution. (int)

get_xreal()

source code 

Gets the X real (physical) size of a data field.

Returns:
X real size value. (float)

get_yreal()

source code 

Gets the Y real (physical) size of a data field.

Returns:
Y real size value. (float)

set_xreal(xreal)

source code 

Sets X real (physical) size value of a data field.

Parameters:
  • xreal - New X real size value. (float)

set_yreal(yreal)

source code 

Sets Y real (physical) size value of a data field.

Parameters:
  • yreal - New Y real size value. (float)

get_dx()

source code 

Gets the horizontal pixel size of a data field in real units.

The result is the same as DataField.get_xreal(data_field)/DataField.get_xres(data_field).

Returns:
Horizontal pixel size. (float)

Since: 2.52

get_dy()

source code 

Gets the vertical pixel size of a data field in real units.

The result is the same as DataField.get_yreal(data_field)/DataField.get_yres(data_field).

Returns:
Vertical pixel size. (float)

Since: 2.52

get_xoffset()

source code 

Gets the X offset of data field origin.

Returns:
X offset value. (float)

get_yoffset()

source code 

Gets the Y offset of data field origin.

Returns:
Y offset value. (float)

set_xoffset(xoff)

source code 

Sets the X offset of a data field origin.

Note offsets don't affect any calculation, nor functions like DataField.rtoj().

Parameters:
  • xoff - New X offset value. (float)

set_yoffset(yoff)

source code 

Sets the Y offset of a data field origin.

Note offsets don't affect any calculation, nor functions like DataField.rtoi().

Parameters:
  • yoff - New Y offset value. (float)

get_si_unit_xy()

source code 

Returns lateral SI unit of a data field.

Returns:
SI unit corresponding to the lateral (XY) dimensions of the data field. Its reference count is not incremented. (SIUnit)

get_si_unit_z()

source code 

Returns value SI unit of a data field.

Returns:
SI unit corresponding to the "height" (Z) dimension of the data field. Its reference count is not incremented. (SIUnit)

set_si_unit_xy(si_unit)

source code 

Sets the SI unit corresponding to the lateral (XY) dimensions of a data field.

It does not assume a reference on si_unit, instead it adds its own reference.

Parameters:
  • si_unit - SI unit to be set. (SIUnit)

set_si_unit_z(si_unit)

source code 

Sets the SI unit corresponding to the "height" (Z) dimension of a data field.

It does not assume a reference on si_unit, instead it adds its own reference.

Parameters:
  • si_unit - SI unit to be set. (SIUnit)

get_value_format_xy(style)

source code 

Finds value format good for displaying coordinates of a data field.

Parameters:
  • style - Unit format style. Expected values: SI_UNIT_FORMAT_NONE, SI_UNIT_FORMAT_PLAIN, SI_UNIT_FORMAT_MARKUP, SI_UNIT_FORMAT_VFMARKUP, SI_UNIT_FORMAT_TEX, SI_UNIT_FORMAT_VFTEX, SI_UNIT_FORMAT_UNICODE, SI_UNIT_FORMAT_VFUNICODE. (SIUnitFormatStyle)
Returns:
Tuple consisting of 2 values (value, format). ((SIValueFormat), (SkipArg))

get_value_format_z(style)

source code 

Finds value format good for displaying values of a data field.

Parameters:
  • style - Unit format style. Expected values: SI_UNIT_FORMAT_NONE, SI_UNIT_FORMAT_PLAIN, SI_UNIT_FORMAT_MARKUP, SI_UNIT_FORMAT_VFMARKUP, SI_UNIT_FORMAT_TEX, SI_UNIT_FORMAT_VFTEX, SI_UNIT_FORMAT_UNICODE, SI_UNIT_FORMAT_VFUNICODE. (SIUnitFormatStyle)
Returns:
Tuple consisting of 2 values (value, format). ((SIValueFormat), (SkipArg))

copy_units(target)

source code 

Sets lateral and value units of a data field to match another data field.

Parameters:
  • target - Target data field. (DataField)

Since: 2.49

copy_units_to_data_line(data_line)

source code 

Sets lateral and value units of a data line to match a data field.

Parameters:
  • data_line - A data line to set units of. (DataLine)

itor(row)

source code 

Transforms vertical pixel coordinate to real (physical) Y coordinate.

That is, it maps the range [0..y-resolution] to the range [0..real-y-size]. It is not suitable for conversion of matrix indices to physical coordinates; use DataField.itor(data_field, row + 0.5) for that.

Parameters:
  • row - Vertical pixel coordinate. (float)
Returns:
Real Y coordinate. (float)

jtor(col)

source code 

Transforms horizontal pixel coordinate to real (physical) X coordinate.

That is, it maps the range [0..x-resolution] to the range [0..real-x-size]. It is not suitable for conversion of matrix indices to physical coordinates; use DataField.jtor(data_field, col + 0.5) for that.

Parameters:
  • col - Horizontal pixel coordinate. (float)
Returns:
Real X coordinate. (float)
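
A small sketch of the pixel-centre convention discussed above, assuming col and row are integer indices inside the field:

    # Physical coordinates of the centre of pixel (col, row); note that
    # itor()/jtor() do not add the field offsets.
    x = data_field.jtor(col + 0.5)
    y = data_field.itor(row + 0.5)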

rtoi(realy)

source code 

Transforms real (physical) Y coordinate to row.

That is, it maps the range [0..real-y-size] to the range [0..y-resolution].

Parameters:
  • realy - Real (physical) Y coordinate. (float)
Returns:
Vertical pixel coordinate. (float)

rtoj(realx)

source code 

Transforms real (physical) X coordinate to column.

That is, it maps the range [0..real-x-size] to the range [0..x-resolution].

Parameters:
  • realx - Real (physical) X coordinate. (float)
Returns:
Horizontal pixel coordinate. (float)

get_val(col, row)

source code 

Gets value at given position in a data field.

Do not access data with this function inside inner loops, it's slow. Get the raw data buffer with DataField.get_data_const() and access it directly instead.

Parameters:
  • col - Column index. (int)
  • row - Row index. (int)
Returns:
Value at (col, row). (float)

set_val(col, row, value)

source code 

Sets value at given position in a data field.

Do not set data with this function inside inner loops, it's slow. Get the raw data buffer with DataField.get_data() and write to it directly instead.

Parameters:
  • col - Column index. (int)
  • row - Row index. (int)
  • value - Value to set. (float)

get_dval(x, y, interpolation)

source code 

Gets interpolated value at arbitrary data field point indexed by pixel coordinates.

Note pixel values are centered in pixels, so to get the same value as DataField.get_val(data_field, j, i) returns, it's necessary to add 0.5: DataField.get_dval(data_field, j+0.5, i+0.5, interpolation).

See also DataField.get_dval_real() that does the same, but takes real coordinates.

Parameters:
  • x - Horizontal position in pixel units, in range [0, x-resolution]. (float)
  • y - Vertical position in pixel units, in range [0, y-resolution]. (float)
  • interpolation - Interpolation method to be used. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
Returns:
Interpolated value at position (x,y). (float)
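
A tiny sketch of the pixel-centre convention, assuming col and row are integer indices inside the field:

    import gwy

    # Both calls return the same value because pixel values are centred
    # in pixels.
    v1 = data_field.get_val(col, row)
    v2 = data_field.get_dval(col + 0.5, row + 0.5, gwy.INTERPOLATION_LINEAR)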

get_dval_real(x, y, interpolation)

source code 

Gets interpolated value at arbitrary data field point indexed by real coordinates.

See also DataField.get_dval() that does the same, but takes pixel coordinates.

Parameters:
  • x - X position in real coordinates. (float)
  • y - Y position in real coordinates. (float)
  • interpolation - Interpolation method to use. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
Returns:
Value at position (x,y). (float)

rotate(angle, interpolation)

source code 

Rotates a data field by a given angle.

This function is mostly obsolete. See DataField.new_rotated() and DataField.new_rotated_90().

Values that fall outside the data field due to the rotation are lost. Undefined values coming in from outside the data field are set to the data field minimum value.

The rotation is performed in pixel space, i.e. it can be in fact a more general affine transform in the real coordinates when pixels are not square.

Parameters:
  • angle - Rotation angle (in radians). (float)
  • interpolation - Interpolation method to use. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)

new_rotated(exterior_mask, angle, interp, resize)

source code 

Creates a new data field by rotating a data field by an arbitrary angle.

The returned data field can contain pixels corresponding to the exterior of dfield (unless resize is ROTATE_RESIZE_CUT). They are filled with a neutral value; pass exterior_mask and replace them as you wish if you need more control.

The rotation is performed in real space, i.e. it is a more general affine transform in the pixel space for data field with non-square pixels. See DataField.rotate() which rotates in the pixel space.

The returned data field always has square pixels. If you want to rotate by a multiple of glib.PI/2 while preserving non-square pixels, you must explicitly use a function such as DataField.new_rotated_90().

Parameters:
  • exterior_mask - Optional data field where pixels corresponding to exterior will be set to 1. It will be resized to match the returned field. (DataField)
  • angle - Rotation angle (in radians). (float)
  • interp - Interpolation type to use. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • resize - Controls how the result size is determined. Expected values: ROTATE_RESIZE_SAME_SIZE, ROTATE_RESIZE_EXPAND, ROTATE_RESIZE_CUT. (RotateResizeType)
Returns:
A newly created data field. (DataField)

Since: 2.46
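
A hedged pygwy sketch, assuming data_field is an existing DataField; the angle is illustrative:

    import math
    import gwy

    # Mask of exterior pixels in the rotated field; it is resized to match
    # the result.
    ext_mask = gwy.DataField(1, 1, 1.0, 1.0, False)

    # Rotate by 30 degrees, expanding the result so nothing is cut off.
    rotated = data_field.new_rotated(ext_mask, math.radians(30.0),
                                     gwy.INTERPOLATION_LINEAR,
                                     gwy.ROTATE_RESIZE_EXPAND)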

new_rotated_90(clockwise)

source code 

Creates a new data field by rotating a data field by 90 degrees.

Parameters:
  • clockwise - True to rotate clockwise, False to rotate anti-clockwise. (bool)
Returns:
A newly created data field. (DataField)

Since: 2.46

invert(x, y, z)

source code 

Reflects and/or inverts a data field.

In the case of value reflection, it's inverted about the mean value.

Note that the axis parameter convention is confusing and different from Brick.invert() and DataLine.invert(). Parameters x and y correspond to the axes around which to flip (the axes themselves stay unchanged). You may need to swap the x and y arguments compared to what you would pass naturally.

Parameters:
  • x - True to reflect Y, i.e. rows within the XY plane. (bool)
  • y - True to reflect X, i.e. columns within the XY plane. (bool)
  • z - True to invert values. (bool)
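
A one-line sketch of the convention, assuming data_field is the field to flip:

    # Flip the image upside down: reflect the rows (the Y direction) while
    # leaving columns and values untouched.
    data_field.invert(True, False, False)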

flip_xy(dest, minor)

source code 

Copies data from one data field to another with transposition.

The destination data field is resized as necessary, its real dimensions set to transposed src dimensions and its offsets are reset. Units are not updated.

Parameters:
  • dest - Destination data field. (DataField)
  • minor - True to mirror about the minor diagonal; False to mirror about major diagonal. (bool)

Since: 2.49

area_flip_xy(col, row, width, height, dest, minor)

source code 

Copies data from a rectangular part of one data field to another with transposition.

The destination data field is resized as necessary, its real dimensions set to transposed src area dimensions and its offsets are reset. Units are not updated.

Parameters:
  • col - Upper-left column coordinate in src. (int)
  • row - Upper-left row coordinate in src. (int)
  • width - Area width (number of columns) in src. (int)
  • height - Area height (number of rows) in src. (int)
  • dest - Destination data field. (DataField)
  • minor - True to mirror about the minor diagonal; False to mirror about major diagonal. (bool)

Since: 2.49

fill(value)

source code 

Fills a data field with given value.

Parameters:
  • value - Value to be entered. (float)

multiply(value)

source code 

Multiplies all values in a data field by given value.

Parameters:
  • value - Value to multiply data_field with. (float)

add(value)

source code 

Adds given value to all values in a data field.

Parameters:
  • value - Value to be added to data field values. (float)

abs()

source code 

Takes absolute value of all values in a data field.

Since: 2.52

area_fill(col, row, width, height, value)

source code 

Fills a rectangular part of a data field with given value.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • value - Value to be entered (float)

area_fill_mask(mask, mode, col, row, width, height, value)

source code 

Fills a masked rectangular part of a data field with given value.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • value - Value to be entered (float)

Since: 2.44

area_clear(col, row, width, height)

source code 

Fills a rectangular part of a data field with zeroes.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

area_multiply(col, row, width, height, value)

source code 

Multiplies values in a rectangular part of a data field by given value

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • value - Value to multiply area with. (float)

area_add(col, row, width, height, value)

source code 

Adds given value to all values in a rectangular part of a data field.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • value - Value to be added to area values. (float)

area_abs(col, row, width, height)

source code 

Takes absolute value of values in a rectangular part of a data field.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

Since: 2.52

get_profile(scol, srow, ecol, erow, res, thickness, interpolation)

source code 

Extracts a possibly averaged profile from data field to a data line.

Parameters:
  • scol - The column the line starts at (inclusive). (int)
  • srow - The row the line starts at (inclusive). (int)
  • ecol - The column the line ends at (inclusive). (int)
  • erow - The row the line ends at (inclusive). (int)
  • res - Requested resolution of data line (the number of samples to take). If nonpositive, data line resolution is chosen to match data_field's. (int)
  • thickness - Thickness of line to be averaged. (int)
  • interpolation - Interpolation type to use. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
Returns:
Tuple consisting of 2 values (value, data_line). ((DataLine), (SkipArg))

get_row(data_line, row)

source code 

Extracts a data field row into a data line.

Parameters:
  • data_line - A data line. It will be resized to the width of data_field. (DataLine)
  • row - Row index. (int)

get_column(data_line, col)

source code 

Extracts a data field column into a data line.

Parameters:
  • data_line - A data line. It will be resized to height of data_field. (DataLine)
  • col - Column index. (int)

set_row(data_line, row)

source code 

Sets a row in the data field to values of a data line.

Data line length must be equal to width of data field.

Parameters:
  • data_line - A data line. (DataLine)
  • row - Row index. (int)

set_column(data_line, col)

source code 

Sets a column in the data field to values of a data line.

Data line length must be equal to height of data field.

Parameters:
  • data_line - A data line. (DataLine)
  • col - Column index. (int)

get_row_part(data_line, row, from_, to)

source code 

Extracts part of a data field row into a data line.

Parameters:
  • data_line - A data line. It will be resized to the row part width. (DataLine)
  • row - Row index. (int)
  • from_ - Start column index. (int)
  • to - End column index + 1. (int)

get_column_part(data_line, col, from_, to)

source code 

Extracts part of a data field column into a data line.

Parameters:
  • data_line - A data line. It will be resized to the column part height. (DataLine)
  • col - Column index. (int)
  • from_ - Start row index. (int)
  • to - End row index + 1. (int)

set_row_part(data_line, row, from_, to)

source code 

Puts a data line into a data field row.

If data line length differs from to-from, it is resampled to this length.

Parameters:
  • data_line - A data line. (DataLine)
  • row - Row index. (int)
  • from_ - Start column index. (int)
  • to - End column index + 1. (int)

set_column_part(data_line, col, from_, to)

source code 

Puts a data line into data field column.

If data line length differs from to-from, it is resampled to this length.

Parameters:
  • data_line - A data line. (DataLine)
  • col - Column index. (int)
  • from_ - Start row index. (int)
  • to - End row index + 1. (int)

get_xder(col, row)

source code 

Computes central derivative in X direction.

On border points, a one-sided derivative is returned.

Parameters:
  • col - Column index. (int)
  • row - Row index. (int)
Returns:
Derivative in X direction. (float)

get_yder(col, row)

source code 

Computes central derivative in Y direction.

On border points, a one-sided derivative is returned.

Note that, for legacy reasons, the derivative is calculated for the opposite y direction than is usual elsewhere in Gwyddion, i.e. if values increase with increasing row number, the returned value is negative.

Parameters:
  • col - Column index. (int)
  • row - Row index. (int)
Returns:
Derivative in Y direction (float)

get_angder(col, row, theta)

source code 

Computes derivative in direction specified by given angle.

Parameters:
  • col - Column index. (int)
  • row - Row index. (int)
  • theta - Angle defining the direction (in radians, counterclockwise). (float)
Returns:
Derivative in direction given by angle theta. (float)

average_xyz(density_map, points, npoints)

source code 

Fills a data field with regularised XYZ data using a simple method.

The real dimensions and offsets of field determine the rectangle in the XY plane that will be regularised. The regularisation method is fast but simple and there are no absolute guarantees of quality, even though the result will be usually quite acceptable.

This especially applies to reasonable views of the XYZ data. Unreasonable views can be rendered unreasonably. In particular if the rectangle does not contain any point from points (either due to high zoom to an empty region or by just being completely off) data_field will be filled entirely with the value of the closest point or something similar.

Parameters:
  • density_map - Optional data field to fill with XYZ point density map. It can be None. (DataField)
  • points - Array of XYZ points. Coordinates X and Y represent positions in the plane; the Z-coordinate represents values. (const-XYZ*)
  • npoints - Number of points. (int)

Since: 2.44

xdwt(wt_coefs, direction, minsize)

source code 

Performs steps of the X-direction image wavelet decomposition.

The smallest low pass coefficients block is equal to minsize. Run with minsize = dfield->xres/2 to perform one step of decomposition or minsize = 4 to perform full decomposition (or anything between).

Parameters:
  • wt_coefs - Data line where the wavelet transform coefficients are stored. (DataLine)
  • direction - Transform direction. Expected values: TRANSFORM_DIRECTION_BACKWARD, TRANSFORM_DIRECTION_FORWARD. (TransformDirection)
  • minsize - Size of the smallest transform result block. (int)

ydwt(wt_coefs, direction, minsize)

source code 

Performs steps of the Y-direction image wavelet decomposition.

The smallest low pass coefficients block is equal to minsize. Run with minsize = dfield->yres/2 to perform one step of decomposition or minsize = 4 to perform full decomposition (or anything between).

Parameters:
  • wt_coefs - Data line where the wavelet transform coefficients are stored. (DataLine)
  • direction - Transform direction. Expected values: TRANSFORM_DIRECTION_BACKWARD, TRANSFORM_DIRECTION_FORWARD. (TransformDirection)
  • minsize - Size of the smallest transform result block. (int)

dwt(wt_coefs, direction, minsize)

source code 

Performs steps of the 2D image wavelet decomposition.

The smallest low pass coefficients block is equal to minsize. Run with minsize = dfield->xres/2 to perform one step of decomposition or minsize = 4 to perform full decomposition (or anything between).

Parameters:
  • wt_coefs - Data line where the wavelet transform coefficients are stored. (DataLine)
  • direction - Transform direction. Expected values: TRANSFORM_DIRECTION_BACKWARD, TRANSFORM_DIRECTION_FORWARD. (TransformDirection)
  • minsize - Size of the smallest transform result block. (int)

dwt_mark_anisotropy(mask, wt_coefs, ratio, lowlimit)

source code 

Marks anisotropy in a data field using the 2D image wavelet decomposition.

Parameters:
  • mask - (DataField)
  • wt_coefs - Data line to store wavelet transform coefficients to. (DataLine)
  • ratio - (float)
  • lowlimit - (int)

elliptic_area_fill(col, row, width, height, value)

source code 

Fills an elliptic region of a data field with given value.

The elliptic region is defined by its bounding box. In versions prior to 2.59 the bounding box must be completely contained in the data field. Since version 2.59 the ellipse can intersect the data field in any manner.

Parameters:
  • col - Upper-left bounding box column coordinate. (int)
  • row - Upper-left bounding box row coordinate. (int)
  • width - Bounding box width (number of columns). (int)
  • height - Bounding box height (number of rows). (int)
  • value - Value to be entered. (float)
Returns:
The number of filled values. (int)

get_elliptic_intersection(col, row, width, height)

source code 

Calculates an upper bound of the number of samples in an elliptic region intersecting a data field.

Parameters:
  • col - Upper-left bounding box column coordinate. (int)
  • row - Upper-left bounding box row coordinate. (int)
  • width - Bounding box width. (int)
  • height - Bounding box height. (int)
Returns:
The number of pixels in an elliptic region with given rectangular bounds (or its upper bound). (int)

Since: 2.59

circular_area_fill(col, row, radius, value)

source code 

Fills a circular region of a data field with given value.

Parameters:
  • col - Column index of the circular area centre. (int)
  • row - Row index of the circular area centre. (int)
  • radius - Circular area radius (in pixels). Any value is allowed, although to get areas that do not deviate from true circles after pixelization too much, half-integer values are recommended, integer values are NOT recommended. (float)
  • value - Value to be entered. (float)
Returns:
The number of filled values. (int)

normalize()

source code 

Normalizes data in a data field to range 0.0 to 1.0.

It is equivalent to DataField.renormalize(data_field, 1.0, 0.0).

If data_field is filled with only one value, it is changed to 0.0.

renormalize(range, offset)

source code 

Transforms data in a data field with linear function to given range.

When range is positive, the new data range is (offset, offset+range); when range is negative, the new data range is (offset-range, offset). In neither case are the data flipped; a negative range only means a different selection of boundaries.

When range is zero, this method is equivalent to DataField.fill(data_field, offset).

Parameters:
  • range - New data interval size. (float)
  • offset - New data interval offset. (float)
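
A short sketch, assuming data_field holds height data in metres; the target range is illustrative:

    # Rescale the data linearly so that it spans 0 to 10 nm.
    data_field.renormalize(10e-9, 0.0)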

area_renormalize(col, row, width, height, range, offset)

source code 

Transforms data in a part of a data field with linear function to given range.

When range is positive, the new data range is (offset, offset+range); when range is negative, the new data range is (offset-range, offset). In neither case are the data flipped; a negative range only means a different selection of boundaries.

When range is zero, this method is equivalent to DataField.fill(data_field, offset).

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • range - New data interval size. (float)
  • offset - New data interval offset. (float)

Since: 2.45

threshold(threshval, bottom, top)

source code 

Thresholds values of a data field.

Values smaller than threshval are set to bottom; values greater than or equal to threshval are set to top.

Parameters:
  • threshval - Threshold value. (float)
  • bottom - Lower replacement value. (float)
  • top - Upper replacement value. (float)
Returns:
The total number of values above threshold. (int)
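
A short sketch, assuming data_field holds height data in metres; the threshold is illustrative:

    # Binarise around 5 nm: smaller values become 0.0, values at or above
    # the threshold become 1.0.  The return value counts the latter.
    n_above = data_field.threshold(5e-9, 0.0, 1.0)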

area_threshold(col, row, width, height, threshval, bottom, top)

source code 

Thresholds values of a rectangular part of a data field.

Values smaller than threshval are set to bottom; values greater than or equal to threshval are set to top.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • threshval - Threshold value. (float)
  • bottom - Lower replacement value. (float)
  • top - Upper replacement value. (float)
Returns:
The total number of values above threshold. (int)

clamp(bottom, top)

source code 

Limits data field values to a range.

Parameters:
  • bottom - Lower limit value. (float)
  • top - Upper limit value. (float)
Returns:
The number of changed values, i.e., values that were outside [bottom, top]. (int)

area_clamp(col, row, width, height, bottom, top)

source code 

Limits values in a rectangular part of a data field to a range.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • bottom - Lower limit value. (float)
  • top - Upper limit value. (float)
Returns:
The number of changed values, i.e., values that were outside [bottom, top]. (int)

area_gather(result, buffer, hsize, vsize, average, col, row, width, height)

source code 

Sums or averages values in rectangular areas around each sample in a data field.

When the gathered area extends out of calculation area, only samples from their intersection are taken into the local sum (or average).

There are no restrictions on values of hsize and vsize with regard to width and height, but they have to be positive.

The result is calculated by means of two-dimensional rolling sums. On the one hand this means the calculation time depends linearly on (width + hsize)*(height + vsize) instead of width*hsize*height*vsize. On the other hand it means the absolute rounding errors of all output values are given by the largest input values, i.e. the relative precision of results that are small in absolute value may be poor.

Parameters:
  • result - A data field to put the result to, it may be data_field itself. (DataField)
  • buffer - A data field to use as a scratch area, its size must be at least width*height. May be None to allocate a private temporary buffer. (DataField)
  • hsize - Horizontal size of gathered area. The area is centered around each sample if hsize is odd, it extends one pixel more to the right if hsize is even. (int)
  • vsize - Vertical size of gathered area. The area is centered around each sample if vsize is odd, it extends one pixel more down if vsize is even. (int)
  • average - True to divide resulting sums by the number of involved samples to get averages instead of sums. (bool)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
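
A hedged pygwy sketch of local averaging over the whole field, assuming data_field is an existing DataField; the 5x5 neighbourhood is illustrative:

    # Local 5x5 means written into a separate field; a private scratch
    # buffer is allocated because buffer is None.
    result = data_field.new_alike(False)
    data_field.area_gather(result, None, 5, 5, True,
                           0, 0, data_field.get_xres(), data_field.get_yres())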

convolve(kernel_field)

source code 

Convolves a data field with given kernel.

Note that the convolution is done by summation and can be slow for large kernels.

Parameters:
  • kernel_field - Kernel field to convolve data_field with. (DataField)

area_convolve(kernel_field, col, row, width, height)

source code 

Convolves a rectangular part of a data field with given kernel.

Note that the convolution is done by summation and can be slow for large kernels.

Parameters:
  • kernel_field - Kernel field to convolve data_field with. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

fft_convolve(kernel_field)

source code 

Convolves a data field with given kernel of the same size using FFT.

This is a simple FFT-based convolution done by multiplication in the frequency domain.

This is a somewhat low-level function. There is no padding or boundary treatment; images are considered periodic. The result is normalised as if the convolution was done by summation and the physical units of data_field are unchanged.

Also note that in order to obtain unshifted result, the kernel needs to be centered around the top left corner. You can use DataField.fft2d_dehumanize() to transform a centered kernel.

Parameters:
  • kernel_field - Kernel field to convolve data_field with. It must have the same size as data_field. (DataField)

Since: 2.54

area_ext_convolve(col, row, width, height, target, kernel, exterior, fill_value, as_integral)

source code 

Convolve a field with a two-dimensional kernel.

Pixel dimensions of target may match either field or just the rectangular area. In the former case the result is written in the same rectangular area; in the latter case the result fills the entire target.

The convolution is performed with the kernel centred on the respective field pixels. For directions in which the kernel has an odd size this holds precisely. For an even-sized kernel this means the kernel centre is placed 0.5 pixel left or up (towards lower indices) from the respective field pixel.

See DataField.extend() for what constitutes the exterior and how it is handled.

If as_integral is False the function performs a simple discrete convolution sum and the value units of target are set to product of field and kernel units.

If as_integral is True the function approximates a convolution integral. In this case kernel should be a sampled continuous transfer function. The units of value target are set to product of field and kernel value units and field lateral units squared. Furthermore, the discrete sum is multiplied by the pixel size (i.e. dx dy in the integral).

In either case, the lateral units and pixel size of kernel are assumed to be the same as for field (albeit not checked), because the convolution does not make sense otherwise.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • target - A two-dimensional data field where the result will be placed. It may be field for an in-place modification. (DataField)
  • kernel - Kernel to convolve field with. (DataField)
  • exterior - Exterior pixels handling. Expected values: EXTERIOR_UNDEFINED, EXTERIOR_BORDER_EXTEND, EXTERIOR_MIRROR_EXTEND, EXTERIOR_PERIODIC, EXTERIOR_FIXED_VALUE, EXTERIOR_LAPLACE. (ExteriorType)
  • fill_value - The value to use with EXTERIOR_FIXED_VALUE exterior. (float)
  • as_integral - True for normalisation and units as a convolution integral, False as a sum. (bool)

Since: 2.49
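
A hedged pygwy sketch of smoothing the whole field with a flat 3x3 kernel; the kernel real dimensions are illustrative (the function assumes the same pixel size as the field anyway):

    import gwy

    # 3x3 flat averaging kernel (each element 1/9).
    kernel = gwy.DataField(3, 3, 3.0, 3.0, False)
    kernel.fill(1.0 / 9.0)

    # Smooth the whole field in place, extending edge values into the exterior.
    data_field.area_ext_convolve(0, 0,
                                 data_field.get_xres(), data_field.get_yres(),
                                 data_field, kernel,
                                 gwy.EXTERIOR_BORDER_EXTEND, 0.0, False)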

convolve_1d(kernel_line, orientation)

source code 

Convolves a data field with given linear kernel.

Parameters:
  • kernel_line - Kernel line to convolve data_field with. (DataLine)
  • orientation - Filter orientation (ORIENTATION_HORIZONTAL for row-wise convolution, ORIENTATION_VERTICAL for column-wise convolution). Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)

Since: 2.4
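
A hedged pygwy sketch of a row-wise smoothing pass, assuming the usual DataLine(res, real, nullme) constructor; the kernel values are illustrative:

    import gwy

    # 1x3 smoothing kernel [0.25, 0.5, 0.25] as a data line.
    kernel = gwy.DataLine(3, 3.0, False)
    kernel.set_val(0, 0.25)
    kernel.set_val(1, 0.50)
    kernel.set_val(2, 0.25)

    # Smooth each row; a second pass with ORIENTATION_VERTICAL would complete
    # a separable 2D smoothing.
    data_field.convolve_1d(kernel, gwy.ORIENTATION_HORIZONTAL)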

area_convolve_1d(kernel_line, orientation, col, row, width, height)

source code 

Convolves a rectangular part of a data field with given linear kernel.

For large separable kernels it can be more efficient to use a sequence of horizontal and vertical convolutions instead of one 2D convolution.

Parameters:
  • kernel_line - Kernel line to convolve data_field with. (DataLine)
  • orientation - Filter orientation (ORIENTATION_HORIZONTAL for row-wise convolution, ORIENTATION_VERTICAL for column-wise convolution). Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

Since: 2.4

area_ext_row_convolve(col, row, width, height, target, kernel, exterior, fill_value, as_integral)

source code 

Convolve a field row-wise with a one-dimensional kernel.

Pixel dimensions of target may match either field or just the rectangular area. In the former case the result is written in the same rectangular area; in the latter case the result fills the entire target.

The convolution is performed with the kernel centred on the respective field pixels. For an odd-sized kernel this holds precisely. For an even-sized kernel this means the kernel centre is placed 0.5 pixel to the left (towards lower column indices) from the respective field pixel.

See DataField.extend() for what constitutes the exterior and how it is handled.

If as_integral is False the function performs a simple discrete convolution sum and the value units of target are set to product of field and kernel units.

If as_integral is True the function approximates a convolution integral. In this case kernel should be a sampled continuous transfer function. The units of value target are set to product of field and kernel value units and field lateral units. Furthermore, the discrete sum is multiplied by the pixel size (i.e. dx in the integral).

In either case, the lateral units and pixel size of kernel are assumed to be the same as for a field's row (albeit not checked), because the convolution does not make sense otherwise.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • target - A two-dimensional data field where the result will be placed. It may be field for an in-place modification. (DataField)
  • kernel - Kernel to convolve field with. (DataLine)
  • exterior - Exterior pixels handling. Expected values: EXTERIOR_UNDEFINED, EXTERIOR_BORDER_EXTEND, EXTERIOR_MIRROR_EXTEND, EXTERIOR_PERIODIC, EXTERIOR_FIXED_VALUE, EXTERIOR_LAPLACE. (ExteriorType)
  • fill_value - The value to use with EXTERIOR_FIXED_VALUE exterior. (float)
  • as_integral - True for normalisation and units as a convolution integral, False as a sum. (bool)

Since: 2.49

filter_median(size)

source code 

Filters a data field with median filter.

This method uses a simple square kernel. Use the general function DataField.area_filter_kth_rank() to perform filtering with a different kernel, for instance circular.

Parameters:
  • size - Size of area to take median of. (int)

area_filter_median(size, col, row, width, height)

source code 

Filters a rectangular part of a data field with median filter.

This method uses a simple square kernel. Use the general function DataField.area_filter_kth_rank() to perform filtering with a different kernel, for instance circular.

Parameters:
  • size - Size of area to take median of. (int)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

filter_mean(size)

source code 

Filters a data field with mean filter of size size.

This method is a simple DataField.area_gather() wrapper, so the kernel is square. Use convolution DataField.area_ext_convolve() to perform a mean filter with different, for instance circular, kernel.

Parameters:
  • size - Averaged area size. (int)

area_filter_mean(size, col, row, width, height)

source code 

Filters a rectangular part of a data field with mean filter of size size.

This method is a simple DataField.area_gather() wrapper, so the kernel is square. Use convolution DataField.area_ext_convolve() to perform a mean filter with different, for instance circular, kernel.

Parameters:
  • size - Averaged area size. (int)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

filter_conservative(size)

source code 

Filters a data field with conservative denoise filter.

Parameters:
  • size - Filtered area size. (int)

area_filter_conservative(size, col, row, width, height)

source code 

Filters a rectangular part of a data field with conservative denoise filter.

Parameters:
  • size - Filtered area size. (int)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

area_filter_laplacian(col, row, width, height)

source code 

Filters a rectangular part of a data field with Laplacian filter.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

filter_laplacian_of_gaussians()

source code 

Filters a data field with Laplacian of Gaussians filter.

Since: 2.23

area_filter_laplacian_of_gaussians(col, row, width, height)

source code 

Filters a rectangular part of a data field with Laplacian of Gaussians filter.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

Since: 2.23

filter_sobel(orientation)

source code 

Filters a data field with a directional Sobel filter.

Parameters:
  • orientation - Filter orientation. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)

area_filter_sobel(orientation, col, row, width, height)

source code 

Filters a rectangular part of a data field with a directional Sobel filter.

Parameters:
  • orientation - Filter orientation. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

filter_sobel_total()

source code 

Filters a data field with total Sobel filter.

Since: 2.31

filter_prewitt(orientation)

source code 

Filters a data field with Prewitt filter.

Parameters:
  • orientation - Filter orientation. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)

area_filter_prewitt(orientation, col, row, width, height)

source code 

Filters a rectangular part of a data field with a directional Prewitt filter.

Parameters:
  • orientation - Filter orientation. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

filter_prewitt_total()

source code 

Filters a data field with total Prewitt filter.

Since: 2.31

filter_slope(xder, yder)

source code 

Calculates x and y derivatives for an entire field.

The derivatives are calculated as the simple symmetrical differences (in physical units, not pixel-wise), except at the edges where the differences are one-sided.

Parameters:
  • xder - Data field where the x-derivative is to be stored, or None if you are only interested in the y-derivative. (DataField)
  • yder - Data field where the y-derivative is to be stored, or None if you are only interested in the x-derivative. (DataField)

Since: 2.37

filter_gauss_step(sigma)

source code 

Processes a data field with Gaussian step detection filter.

The filter is a multi-directional combination of convolutions with Gaussian multiplied by a signed step function.

The resulting values correspond roughly to the step height around the pixel.

Parameters:
  • sigma - Gaussian filter width (in pixels). (float)

Since: 2.54

filter_dechecker()

source code 

Filters a data field with 5x5 checker pattern removal filter.

Since: 2.1

area_filter_dechecker(col, row, width, height)

source code 

Filters a rectangular part of a data field with 5x5 checker pattern removal filter.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

Since: 2.1

filter_gaussian(sigma)

source code 

Filters a data field with a Gaussian filter.

Parameters:
  • sigma - The sigma parameter of the Gaussian. (float)

Since: 2.4

area_filter_gaussian(sigma, col, row, width, height)

source code 

Filters a rectangular part of a data field with a Gaussian filter.

The Gaussian is normalized, i.e. it is sum-preserving.

Parameters:
  • sigma - The sigma parameter of the Gaussian. (float)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

Since: 2.4

row_gaussian(sigma)

source code 

Filters a data field with a Gaussian filter in horizontal direction.

The Gaussian is normalized, i.e. it is sum-preserving.

Parameters:
  • sigma - The sigma parameter of the Gaussian. (float)

Since: 2.54

column_gaussian(sigma)

source code 

Filters a data field with a Gaussian filter in vertical direction.

The Gaussian is normalized, i.e. it is sum-preserving.

Parameters:
  • sigma - The sigma parameter of the Gaussian. (float)

Since: 2.54

filter_minimum(size)

source code 

Filters a data field with minimum filter.

This method uses a simple square kernel. Use the general function DataField.area_filter_min_max() to perform filtering with a different kernel, for instance circular.

Parameters:
  • size - Neighbourhood size for minimum search. (int)

area_filter_minimum(size, col, row, width, height)

source code 

Filters a rectangular part of a data field with minimum filter.

This operation is often called erosion filter.

This method uses a simple square kernel. Use the general function DataField.area_filter_min_max() to perform filtering with a different kernel, for instance circular.

Parameters:
  • size - Neighbourhood size for minimum search. (int)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

filter_maximum(size)

source code 

Filters a data field with maximum filter.

This method uses a simple square kernel. Use the general function DataField.area_filter_min_max() to perform filtering with a different kernel, for instance circular.

Parameters:
  • size - Neighbourhood size for maximum search. (int)

area_filter_maximum(size, col, row, width, height)

source code 

Filters a rectangular part of a data field with maximum filter.

This operation is often called dilation filter.

This method uses a simple square kernel. Use the general function DataField.area_filter_min_max() to perform filtering with a different kernel, for instance circular.

Parameters:
  • size - Neighbourhood size for maximum search. (int)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

area_filter_min_max(kernel, filtertype, col, row, width, height)

source code 

Applies a morphological operation with a flat structuring element to a part of a data field.

Morphological operations with flat structuring elements can be expressed using minimum (erosion) and maximum (dilation) filters that are the basic operations this function can perform.

The kernel field is a mask that defines the shape of the flat structuring element. It is reflected for all maximum operations (dilation). For symmetrical kernels this does not matter. You can use DataField.elliptic_area_fill() to create a true circular (or elliptical) kernel.

The kernel is implicitly centered, i.e. it will be applied symmetrically to avoid unexpected data movement. Even-sized kernels (generally not recommended) will extend farther towards the top left image corner for minimum (erosion) and towards the bottom right corner for maximum (dilation) operations due to the reflection. If you need off-center structuring elements you can add empty rows or columns to one side of the kernel to counteract the symmetrisation.

The operation is linear-time in kernel size for any convex kernel. Note DataField.area_filter_minimum() and DataField.area_filter_maximum(), which are limited to square structuring elements, are much faster for large sizes of the squares.

The exterior is always handled as EXTERIOR_BORDER_EXTEND.

Parameters:
  • kernel - Data field defining the flat structuring element. (DataField)
  • filtertype - The type of filter to apply. Expected values: MIN_MAX_FILTER_MINIMUM, MIN_MAX_FILTER_EROSION, MIN_MAX_FILTER_MAXIMUM, MIN_MAX_FILTER_DILATION, MIN_MAX_FILTER_OPENING, MIN_MAX_FILTER_CLOSING, MIN_MAX_FILTER_RANGE, MIN_MAX_FILTER_NORMALIZATION. (MinMaxFilterType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

Since: 2.43
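
A minimal sketch of a morphological opening with a true disc structuring element, as described above. The gwy.DataField constructor, elliptic_area_fill() and get_xres()/get_yres() are assumptions not documented in this entry; field is a hypothetical DataField.

  import gwy

  r = 5                                                      # disc radius in pixels
  kernel = gwy.DataField(2*r + 1, 2*r + 1, 1.0, 1.0, True)
  kernel.elliptic_area_fill(0, 0, 2*r + 1, 2*r + 1, 1.0)     # flat disc mask
  field.area_filter_min_max(kernel, gwy.MIN_MAX_FILTER_OPENING,
                            0, 0, field.get_xres(), field.get_yres())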

area_filter_disc_asf(radius, closing, col, row, width, height)

source code 

Applies an alternating sequential morphological filter with a flat disc structuring element to a part of a data field.

Alternating sequential filter is a filter consisting of repeated opening and closing (or closing and opening) with progressively larger structuring elements. This function performs such filtering for a sequence of structuring elements consisting of true Euclidean discs with increasing radii. The largest disc in the sequence fits into a (2·radius + 1) × (2·radius + 1) square.

Parameters:
  • radius - Maximum radius of the circular structuring element, in pixels. For radius 0 and smaller the filter is a no-op. (int)
  • closing - True requests an opening-closing filter (i.e. ending with closing), False requests a closing-opening filter (i.e. ending with opening). (bool)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

Since: 2.43

area_filter_kth_rank(kernel, col, row, width, height, k)

source code 

Applies a k-th rank filter to a part of a data field.

Pass half the number of non-zero values in kernel as k for a median filter.

The kernel field is a mask that defines the shape of the kernel. You can use DataField.elliptic_area_fill() to create a true circular (or elliptical) kernel. The kernel must be non-empty.

The kernel is implicitly centered, i.e. it will be applied symmetrically to avoid unexpected data movement. Even-sized kernels (generally not recommended) will extend farther towards the top left image corner for minimum (erosion) and towards the bottom right corner for maximum (dilation) operations due to the reflection. If you need off-center structuring elements you can add empty rows or columns to one side of the kernel to counteract the symmetrisation.

The exterior is always handled as EXTERIOR_BORDER_EXTEND.

If the operation is aborted, the contents of data_field are left untouched.

Parameters:
  • kernel - Data field defining the kernel shape. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • k - Rank of the value to store as the output (from lowest to highest). (int)
Returns:
Tuple consisting of 2 values (value, set_fraction). ((bool), (SkipArg))

Since: 2.51
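
A minimal sketch of a median filter over a circular neighbourhood, following the note above about choosing k. The kernel is constructed as in the opening example above; get_sum() and get_xres()/get_yres() are assumptions; field is a hypothetical DataField.

  nonzero = int(round(kernel.get_sum()))          # number of non-zero kernel pixels
  k = nonzero // 2                                # median rank
  field.area_filter_kth_rank(kernel, 0, 0,
                             field.get_xres(), field.get_yres(), k)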

area_filter_trimmed_mean(kernel, col, row, width, height, nlowest, nhighest)

source code 

Applies a trimmed mean filter to a part of a data field.

At least one value must remain after the trimming, i.e. nlowest + nhighest must be smaller than the number of non-zero values in kernel. Usually one passes the same number as both nlowest and nhighest, but it is not a requirement.

The kernel field is a mask that defines the shape of the kernel. You can use DataField.elliptic_area_fill() to create a true circular (or elliptical) kernel. The kernel must be non-empty.

The kernel is implicitly centered, i.e. it will be applied symmetrically to avoid unexpected data movement. Even-sized kernels (generally not recommended) will extend farther towards the top left image corner for minimum (erosion) and towards the bottom right corner for maximum (dilation) operations due to the reflection. If you need off-center structuring elements you can add empty rows or columns to one side of the kernel to counteract the symmetrisation.

The exterior is always handled as EXTERIOR_BORDER_EXTEND.

If the operation is aborted, the contents of data_field are left untouched.

Parameters:
  • kernel - Data field defining the kernel shape. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • nlowest - The number of lowest values to discard. (int)
  • nhighest - The number of highest values to discard. (int)
Returns:
Tuple consisting of 2 values (value, set_fraction). ((bool), (SkipArg))

Since: 2.53

filter_rms(size)

source code 

Filters a data field with RMS filter.

Parameters:
  • size - Area size. (int)

area_filter_rms(size, col, row, width, height)

source code 

Filters a rectangular part of a data field with RMS filter of size size.

RMS filter computes root mean square in given area.

Parameters:
  • size - Area size. (int)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

area_filter_kuwahara(col, row, width, height)

source code 

Filters a rectangular part of a data field with a Kuwahara (edge-preserving smoothing) filter.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)

filter_canny(threshold)

source code 

Filters a data field with a Canny edge detector filter.

Parameters:
  • threshold - Slope detection threshold (range 0..1). (float)

shade(target_field, theta, phi)

source code 

Shades a data field.

Parameters:
  • target_field - A data field to put the shade to. It will be resized to match data_field. (DataField)
  • theta - Shading angle (in radians, from north pole). (float)
  • phi - Shade orientation in xy plane (in radians, counterclockwise). (float)

filter_harris(y_gradient, result, neighbourhood, alpha)

source code 

Applies Harris corner detection filter to a pair of gradient data fields.

All passed data fields must have the same size.

Parameters:
  • y_gradient - Data field with pre-calculated vertical derivative. (DataField)
  • result - Data field for the result. (DataField)
  • neighbourhood - Neighbourhood size. (int)
  • alpha - Sensitivity parameter (the squared trace is multiplied by it). (float)

deconvolve_regularized(operand, out, sigma)

source code 

Performs deconvolution of a data field using a simple regularization.

The operation can be used to deblur an image or conversely recover the point spread function from ideal response image.

Convolving the result with the operand using DataField.area_ext_convolve() with as_integral=True will recover (approximately) the image. This means the deconvolution assumes continuous convolution, not discrete sums. Note that for the latter case this means the point spread function will be centered in out.

For recovery of transfer function, dfield and operand should be windowed beforehand if they are not periodic.

Parameters:
  • operand - One of the factors entering the convolution resulting in dfield. It must have the same dimensions as dfield and it is assumed it has also the same physical size. (DataField)
  • out - Data field where to put the result into. It will be resized to match dfield. It can also be dfield itself. (DataField)
  • sigma - Regularization parameter. (float)

Since: 2.51

deconvolve_psf_leastsq(operand, out, sigma, border)

source code 

Performs reconstruction of transfer function from convolved and ideal sharp images.

The transfer function is reconstructed by solving the corresponding least squares problem. This method is suitable when the dimensions of out are much smaller than the images.

Since the method accumulates errors close to edges, they can be removed within the procedure by reconstructing a slightly larger transfer function and then cutting the result. The extension is given by border, typical suitable values are 2 or 3.

Convolving the result with the operand using DataField.area_ext_convolve() with as_integral=True will recover (approximately) the image. This means the deconvolution assumes continuous convolution, not discrete sums. Note that for the latter case this means the point spread function will be centered in out.

Fields dfield and operand should be windowed beforehand if they are not periodic.

Parameters:
  • operand - Ideal sharp measurement (before convolution). It must have the same dimensions as dfield and it is assumed it has also the same physical size. (DataField)
  • out - Output field for the transfer function. Its dimensions are preserved and determine the transfer function support. It must be smaller than half of dfield. (DataField)
  • sigma - Regularization parameter. (float)
  • border - Number of pixels to extend and cut off the transfer function. (int)

Since: 2.52

find_regularization_sigma_for_psf(ideal)

source code 

Finds regularization parameter for point spread function calculation using regularized deconvolution.

The estimated value should be suitable for reconstruction of the point spread function using DataField.deconvolve_regularized(). The estimate is only suitable for PSF reconstruction; it does not work for image sharpening using a known PSF.

Parameters:
  • ideal - A data field with ideal sharp data. (DataField)
Returns:
Estimated regularization parameter. (float)

Since: 2.51
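
A minimal sketch tying this estimate to DataField.deconvolve_regularized(). Here measured and ideal are hypothetical DataFields with identical dimensions and DataField.duplicate() is an assumption not documented in this entry.

  sigma = measured.find_regularization_sigma_for_psf(ideal)
  psf = measured.duplicate()                      # reused only for its geometry
  measured.deconvolve_regularized(ideal, psf, sigma)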

find_regularization_sigma_leastsq(ideal, width, height, border)

source code 

Finds regularization parameter for point spread function calculation using least squares method.

The estimated value should be suitable for reconstruction of the point spread function using DataField.deconvolve_psf_leastsq().

Parameters:
  • ideal - A data field with ideal sharp data. (DataField)
  • width - Horizontal size of transfer function support. (int)
  • height - Vertical size of transfer function support. (int)
  • border - Number of pixels to extend and cut off the transfer function. (int)
Returns:
Estimated regularization parameter. (float)

Since: 2.52
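
A minimal sketch of the corresponding least-squares workflow. The 41 × 41 support, the border of 2 and the 1.0 real size of the output field are purely illustrative; measured and ideal are hypothetical DataFields and the gwy.DataField constructor is an assumption.

  import gwy

  tf_w, tf_h, border = 41, 41, 2                  # transfer function support
  sigma = measured.find_regularization_sigma_leastsq(ideal, tf_w, tf_h, border)
  psf = gwy.DataField(tf_w, tf_h, 1.0, 1.0, False)
  measured.deconvolve_psf_leastsq(ideal, psf, sigma, border)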

fractal_partitioning(xresult, yresult, interpolation)

source code 

Computes data for log-log plot by partitioning.

Data lines xresult and yresult will be resized to the output size and they will contain corresponding values at each position.

Parameters:
  • xresult - Data line to store x-values for log-log plot to. (DataLine)
  • yresult - Data line to store y-values for log-log plot to. (DataLine)
  • interpolation - Interpolation type. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)

fractal_cubecounting(xresult, yresult, interpolation)

source code 

Computes data for log-log plot by cube counting.

Data lines xresult and yresult will be resized to the output size and they will contain corresponding values at each position.

Parameters:
  • xresult - Data line to store x-values for log-log plot to. (DataLine)
  • yresult - Data line to store y-values for log-log plot to. (DataLine)
  • interpolation - Interpolation type. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)

fractal_triangulation(xresult, yresult, interpolation)

source code 

Computes data for log-log plot by triangulation.

Data lines xresult and yresult will be resized to the output size and they will contain corresponding values at each position.

Parameters:
  • xresult - Data line to store x-values for log-log plot to. (DataLine)
  • yresult - Data line to store y-values for log-log plot to. (DataLine)
  • interpolation - Interpolation type. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)

fractal_psdf(xresult, yresult, interpolation)

source code 

Computes data for log-log plot by spectral density method.

Data lines xresult and yresult will be resized to the output size and they will contain corresponding values at each position.

Parameters:
  • xresult - Data line to store x-values for log-log plot to. (DataLine)
  • yresult - Data line to store y-values for log-log plot to. (DataLine)
  • interpolation - Interpolation type. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)

fractal_correction(mask_field, interpolation)

source code 

Replaces data under mask with interpolated values using fractal interpolation.

Parameters:
  • mask_field - Mask of places to be corrected. (DataField)
  • interpolation - Interpolation type. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)

grains_mark_curvature(grain_field, threshval, below)

source code 

Marks data that are above/below curvature threshold.

Parameters:
  • grain_field - Data field to store the resulting mask to. (DataField)
  • threshval - Relative curvature threshold, in percents. (float)
  • below - If True, data below threshold are marked, otherwise data above threshold are marked. (bool)

grains_mark_watershed(grain_field, locate_steps, locate_thresh, locate_dropsize, wshed_steps, wshed_dropsize, prefilter, below)

source code 

Performs watershed algorithm.

Parameters:
  • grain_field - Result of marking (mask). (DataField)
  • locate_steps - Locating algorithm steps. (int)
  • locate_thresh - Locating algorithm threshold. (int)
  • locate_dropsize - Locating drop size. (float)
  • wshed_steps - Watershed steps. (int)
  • wshed_dropsize - Watershed drop size. (float)
  • prefilter - Use prefiltering. (bool)
  • below - If True, valleys are marked, otherwise mountains are marked. (bool)

grains_remove_grain(col, row)

source code 

Removes one grain at given position.

Parameters:
  • col - Column inside a grain. (int)
  • row - Row inside a grain. (int)
Returns:
True if a grain was actually removed, i.e. (col,row) was inside a grain. (bool)

grains_extract_grain(col, row)

source code 

Removes all grains except that one at given position.

If there is no grain at (col, row), all grains are removed.

Parameters:
  • col - Column inside a grain. (int)
  • row - Row inside a grain. (int)
Returns:
True if a grain remained (i.e., (col,row) was inside a grain). (bool)

grains_remove_by_number(number)

source code 

Removes grain identified by number.

Parameters:
  • number - Grain number (identifier of the grain to remove). (int)

Since: 2.35

grains_remove_by_size(size)

source code 

Removes all grains below specified area.

Parameters:
  • size - Grain area threshold, in square pixels. (int)

grains_remove_by_height(grain_field, threshval, below)

source code 

Removes grains that are higher/lower than given threshold value.

Parameters:
  • grain_field - Field of marked grains (mask) (DataField)
  • threshval - Relative height threshold, in percents. (float)
  • below - If True, grains below threshold are removed, otherwise grains above threshold are removed. (bool)

grains_remove_touching_border()

source code 

Removes all grains that touch field borders.

Since: 2.30

grains_watershed_init(grain_field, locate_steps, locate_thresh, locate_dropsize, wshed_steps, wshed_dropsize, prefilter, below)

source code 

Initializes the watershed algorithm.

This iterator reports its state as WatershedStateType.

Parameters:
  • grain_field - Result of marking (mask). (DataField)
  • locate_steps - Locating algorithm steps. (int)
  • locate_thresh - Locating algorithm threshold. (int)
  • locate_dropsize - Locating drop size. (float)
  • wshed_steps - Watershed steps. (int)
  • wshed_dropsize - Watershed drop size. (float)
  • prefilter - Use prefiltering. (bool)
  • below - If True, valleys are marked, otherwise mountains are marked. (bool)
Returns:
A new watershed iterator. (ComputationState*)

grains_mark_height(grain_field, threshval, below)

source code 

Marks data that are above/below height threshold.

Parameters:
  • grain_field - Data field to store the resulting mask to. (DataField)
  • threshval - Relative height threshold, in percents. (float)
  • below - If True, data below threshold are marked, otherwise data above threshold are marked. (bool)

grains_mark_slope(grain_field, threshval, below)

source code 

Marks data that are above/below slope threshold.

Parameters:
  • grain_field - Data field to store the resulting mask to. (DataField)
  • threshval - Relative slope threshold, in percents. (float)
  • below - If True, data below threshold are marked, otherwise data above threshold are marked. (bool)

otsu_threshold()

source code 

Finds Otsu's height threshold for a data field.

The Otsu threshold is optimal in the sense that it minimises the intra-class variance of the two classes of pixels: those above and those below the threshold.

Returns:
(float)

Since: 2.37
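
A minimal sketch turning the Otsu threshold into a 0/1 mask. DataField.duplicate() and DataField.threshold(threshval, bottom, top) are assumptions not documented in this entry; field is a hypothetical DataField.

  t = field.otsu_threshold()
  mask = field.duplicate()
  mask.threshold(t, 0.0, 1.0)                     # below -> 0.0, above -> 1.0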

grains_add(add_field)

source code 

Adds add_field grains to grain_field.

Note: this function is equivalent to calling DataField.max_of_fields(grain_field, grain_field, add_field).

Parameters:
  • add_field - Field of marked grains (mask) to be added. (DataField)

grains_intersect(intersect_field)

source code 

Performs an intersection between two grain fields; the result is stored in grain_field.

Note: this function is equivalent to calling DataField.min_of_fields(grain_field, grain_field, intersect_field).

Parameters:
  • intersect_field - Field of marked grains (mask). (DataField)

grains_invert()

source code 

Inverts a data field representing a mask.

All non-positive values are transformed to 1.0. All positive values are transformed to 0.0.

Since: 2.43

grains_autocrop(symmetrically)

source code 

Removes empty border rows and columns from a data field representing a mask.

If there are border rows and columns filled completely with non-positive values, the size of the data field is reduced by removing these rows and columns. The parameter symmetrically controls whether the size reduction is the maximum possible or symmetrical.

When there is no positive value in the field, the field size is reduced to the smallest possible. This means 1×1 for symmetrically being False, while for symmetrically being True even dimensions are reduced to 2.

Parameters:
  • symmetrically - True to remove borders symmetrically, i.e. the same number of pixels from left and right, and also from top and bottom. False to remove as many empty rows and columns as possible. (bool)
Returns:
Tuple consisting of 5 values (value, left, right, up, down). ((bool), (int), (int), (int), (int))

Since: 2.43

area_grains_tgnd(target_line, col, row, width, height, below, nstats)

source code 

Calculates threshold grain number distribution.

This function is a simplified DataField.area_grains_tgnd_range() that calculates the distribution over the full height range.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to the requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • below - If True, valleys are marked, otherwise mountains are marked. (bool)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

area_grains_tgnd_range(target_line, col, row, width, height, min, max, below, nstats)

source code 

Calculates threshold grain number distribution in given height range.

This is the number of grains for each of nstats equidistant height threshold levels. For large nstats this function is much faster than the equivalent number of DataField.grains_mark_height() calls.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to the requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • min - Minimum threshold value. (float)
  • max - Maximum threshold value. (float)
  • below - If True, valleys are marked, otherwise mountains are marked. (bool)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

grain_distance_transform()

source code 

Performs Euclidean distance transform of a data field with grains.

Each non-zero value will be replaced with Euclidean distance to the grain boundary, measured in pixels.

See also DataField.grain_simple_dist_trans() for simple distance transforms such as city-block or chessboard.

Since: 2.36

grain_simple_dist_trans(dtype, from_border)

source code 

Performs a distance transform of a data field with grains.

Each non-zero value will be replaced with a distance to the grain boundary, measured in pixels.

Note this function can calculate the true Euclidean distance transform only since 2.43. Use DataField.grain_distance_transform() for the EDT if you need compatibility with older versions.

Parameters:
  • dtype - Type of simple distance to use. Expected values: DISTANCE_TRANSFORM_CITYBLOCK, DISTANCE_TRANSFORM_CONN4, DISTANCE_TRANSFORM_CHESS, DISTANCE_TRANSFORM_CONN8, DISTANCE_TRANSFORM_OCTAGONAL48, DISTANCE_TRANSFORM_OCTAGONAL84, DISTANCE_TRANSFORM_OCTAGONAL, DISTANCE_TRANSFORM_EUCLIDEAN. (DistanceTransformType)
  • from_border - True to consider image edges to be grain boundaries. (bool)

Since: 2.41

grains_shrink(amount, dtype, from_border)

source code 

Erodes a data field containing mask by specified amount using a distance measure.

Non-zero pixels in data_field will be replaced with zeroes if they are not farther than amount from the grain boundary as defined by dtype.

Parameters:
  • amount - How much the grains should be reduced, in pixels. It is inclusive, i.e. pixels that are exactly amount away from the boundary are also removed. (float)
  • dtype - Type of simple distance to use. Expected values: DISTANCE_TRANSFORM_CITYBLOCK, DISTANCE_TRANSFORM_CONN4, DISTANCE_TRANSFORM_CHESS, DISTANCE_TRANSFORM_CONN8, DISTANCE_TRANSFORM_OCTAGONAL48, DISTANCE_TRANSFORM_OCTAGONAL84, DISTANCE_TRANSFORM_OCTAGONAL, DISTANCE_TRANSFORM_EUCLIDEAN. (DistanceTransformType)
  • from_border - True to consider image edges to be grain boundaries. False to reduce grains touching field boundaries only along the boundaries. (bool)

Since: 2.43

grains_grow(amount, dtype, prevent_merging)

source code 

Dilates a data field containing mask by specified amount using a distance measure.

Non-positive pixels in data_field will be replaced with ones if they are not farther than amount from the grain boundary as defined by dtype.

Parameters:
  • amount - How much the grains should be expanded, in pixels. It is inclusive, i.e. exterior pixels that are exactly amount away from the boundary are also filled. (float)
  • dtype - Type of simple distance to use. Expected values: DISTANCE_TRANSFORM_CITYBLOCK, DISTANCE_TRANSFORM_CONN4, DISTANCE_TRANSFORM_CHESS, DISTANCE_TRANSFORM_CONN8, DISTANCE_TRANSFORM_OCTAGONAL48, DISTANCE_TRANSFORM_OCTAGONAL84, DISTANCE_TRANSFORM_OCTAGONAL, DISTANCE_TRANSFORM_EUCLIDEAN. (DistanceTransformType)
  • prevent_merging - True to prevent grain merging, i.e. the growth stops where two grains would merge. False to simply expand the grains, without regard to grain connectivity. (bool)

Since: 2.43

grains_thin()

source code 

Performs thinning of a data field containing mask.

The result of thinning is a ‘skeleton’ mask consisting of single-pixel thin lines.

Since: 2.48

fill_voids(nonsimple)

source code 

Fills voids in grains in a data field representing a mask.

Voids in grains are zero pixels in data_field from which no path exists through other zero pixels to the field boundary. The paths are considered in 8-connectivity because grains themselves are considered in 4-connectivity.

Parameters:
  • nonsimple - Pass True to fill also voids that are not simple-connected (e.g. ring-like). This can result in grain merging if a small grain is contained within a void. Pass False to fill only simple-connected grains. (bool)
Returns:
True if any voids were filled at all, False if no change was made. (bool)

Since: 2.37

mark_extrema(extrema, maxima)

source code 

Marks local maxima or minima in a two-dimensional field.

Local (or regional) maximum is a contiguous set of pixels that have the same value and this value is sharply greater than the value of any pixel touching the set. A minimum is defined analogously. A field filled with a single value is considered to have neither minimum nor maximum.

Parameters:
  • extrema - Target field for the extrema mask. (DataField)
  • maxima - True to mark maxima, False to mark minima. (bool)

Since: 2.37

zoom_fft(isrc, rdest, idest, mx, my, fx0, fy0, fx1, fy1)

source code 

Computes Zoom FFT of a data field.

The output is a DFT, but computed for an arbitrary 2D Cartesian grid of frequencies along x and y. The frequencies do not have to be in any relation to the data sampling step.

The top-left pixel of the output corresponds exactly to (fx0,fy0) and the bottom right exactly to (fx1,fy1). So the frequency sampling steps will be (fx1 − fx0)/(mx − 1) and (fy1 − fy0)/(my − 1), instead of the more usual division by mx and my. To follow the usual Gwyddion conventions, the output data field real size will be (fx1 − fx0)/(mx − 1)·mx along x, and similarly along y. If it seems confusing, just take the output as indexed by integers and work with that.

Frequency step of one corresponds to the normal DFT frequency step. Therefore, passing fx0=0, fx1=xres−1, fy0=0, fy1=yres−1 (where the data field has xres × yres points), mx=xres and my=yres reproduces the usual DFT, only more slowly. The result is normalised as a raw FFT and the units of the output data fields are unchanged.

The transform direction is always forward. Windowing or other preprocessing needs to be done separately beforehand; it would usually be done only once, followed by any number of Zoom FFTs.

Parameters:
  • isrc - Imaginary input data field. It can be None for real-to-complex transform. (DataField)
  • rdest - Real output data field. It will be resized to mx × my samples. (DataField)
  • idest - Imaginary output data field. It will be resized to mx × my samples. (DataField)
  • mx - The number of horizontal frequencies to compute. It must be at least 2. (int)
  • my - The number of vertical frequencies to compute. It must be at least 2. (int)
  • fx0 - The first horizontal spatial frequency, measured in DFT frequency steps. (float)
  • fy0 - The first vertical spatial frequency, measured in DFT frequency steps. (float)
  • fx1 - The last horizontal spatial frequency, measured in DFT frequency steps. (float)
  • fy1 - The last vertical spatial frequency, measured in DFT frequency steps. (float)

Since: 2.62
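
A minimal sketch reproducing the ordinary DFT with zoom_fft(), following the note above. The gwy.DataField constructor and get_xres()/get_yres() are assumptions; field is a hypothetical DataField and the 1 × 1 output fields are resized by the call.

  import gwy

  xres, yres = field.get_xres(), field.get_yres()
  rdest = gwy.DataField(1, 1, 1.0, 1.0, False)
  idest = gwy.DataField(1, 1, 1.0, 1.0, False)
  field.zoom_fft(None, rdest, idest, xres, yres,
                 0.0, 0.0, float(xres - 1), float(yres - 1))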

fft1d(iin, rout, iout, orientation, windowing, direction, interpolation, preserverms, level)

source code 

Transforms all rows or columns in a data field with Fast Fourier Transform.

If requested a windowing and/or leveling is applied to preprocess data to obtain reasonable results.

Parameters:
  • iin - Imaginary input data field. It can be None for real-to-complex transform which can be somewhat faster than complex-to-complex transform. (DataField)
  • rout - Real output data field, it will be resized to area size. (DataField)
  • iout - Imaginary output data field, it will be resized to area size. (DataField)
  • orientation - Orientation: pass ORIENTATION_HORIZONTAL to transform rows, ORIENTATION_VERTICAL to transform columns. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • windowing - Windowing type. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)
  • direction - FFT direction. Expected values: TRANSFORM_DIRECTION_BACKWARD, TRANSFORM_DIRECTION_FORWARD. (TransformDirection)
  • interpolation - Interpolation type. Ignored since 2.8 as no resampling is performed. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • preserverms - True to preserve RMS while windowing. (bool)
  • level - 0 to perform no leveling, 1 to subtract mean value, 2 to subtract line (the number can be interpreted as the first polynomial degree to keep, but only the enumerated three values are available). (int)

area_1dfft(iin, rout, iout, col, row, width, height, orientation, windowing, direction, interpolation, preserverms, level)

source code 

Transforms all rows or columns in a rectangular part of a data field with Fast Fourier Transform.

If requested a windowing and/or leveling is applied to preprocess data to obtain reasonable results.

Parameters:
  • iin - Imaginary input data field. It can be None for real-to-complex transform which can be somewhat faster than complex-to-complex transform. (DataField)
  • rout - Real output data field, it will be resized to area size. (DataField)
  • iout - Imaginary output data field, it will be resized to area size. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns), must be at least 2 for horizontal transforms. (int)
  • height - Area height (number of rows), must be at least 2 for vertical transforms. (int)
  • orientation - Orientation: pass ORIENTATION_HORIZONTAL to transform rows, ORIENTATION_VERTICAL to transform columns. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • windowing - Windowing type. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)
  • direction - FFT direction. Expected values: TRANSFORM_DIRECTION_BACKWARD, TRANSFORM_DIRECTION_FORWARD. (TransformDirection)
  • interpolation - Interpolation type. Ignored since 2.8 as no resampling is performed. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • preserverms - True to preserve RMS while windowing. (bool)
  • level - 0 to perform no leveling, 1 to subtract mean value, 2 to subtract lines (the number can be interpreted as the first polynomial degree to keep, but only the enumerated three values are available). (int)

fft1d_raw(iin, rout, iout, orientation, direction)

source code 

Transforms all rows or columns in a data field with Fast Fourier Transform.

No leveling, windowing nor scaling is performed.

The normalisation of FFT is symmetrical, so transformations in both directions are unitary.

Since 2.8 the dimensions need not be from the set of sizes returned by gwy_fft_find_nice_size().

Parameters:
  • iin - Imaginary input data field. It can be None for real-to-complex transform. (DataField)
  • rout - Real output data field, it will be resized to rin size. (DataField)
  • iout - Imaginary output data field, it will be resized to rin size. (DataField)
  • orientation - Orientation: pass ORIENTATION_HORIZONTAL to transform rows, ORIENTATION_VERTICAL to transform columns. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • direction - FFT direction. Expected values: TRANSFORM_DIRECTION_BACKWARD, TRANSFORM_DIRECTION_FORWARD. (TransformDirection)

Since: 2.1

fft2d(iin, rout, iout, windowing, direction, interpolation, preserverms, level)

source code 

Calculates the 2D Fast Fourier Transform of a data field.

If requested a windowing and/or leveling is applied to preprocess data to obtain reasonable results.

Lateral dimensions, offsets and units are unchanged. See DataField.fft_postprocess() for that.

Parameters:
  • iin - Imaginary input data field. It can be None for real-to-complex transform which can be somewhat faster than complex-to-complex transform. (DataField)
  • rout - Real output data field, it will be resized to area size. (DataField)
  • iout - Imaginary output data field, it will be resized to area size. (DataField)
  • windowing - Windowing type. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)
  • direction - FFT direction. Expected values: TRANSFORM_DIRECTION_BACKWARD, TRANSFORM_DIRECTION_FORWARD. (TransformDirection)
  • interpolation - Interpolation type. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • preserverms - True to preserve RMS while windowing. (bool)
  • level - 0 to perform no leveling, 1 to subtract mean value, 2 to subtract plane (the number can be interpreted as the first polynomial degree to keep, but only the enumerated three values are available). (int)

area_2dfft(iin, rout, iout, col, row, width, height, windowing, direction, interpolation, preserverms, level)

source code 

Calculates 2D Fast Fourier Transform of a rectangular area of a data field.

If requested a windowing and/or leveling is applied to preprocess data to obtain reasonable results.

Parameters:
  • iin - Imaginary input data field. It can be None for real-to-complex transform which can be somewhat faster than complex-to-complex transform. (DataField)
  • rout - Real output data field, it will be resized to area size. (DataField)
  • iout - Imaginary output data field, it will be resized to area size. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns), must be at least 2. (int)
  • height - Area height (number of rows), must be at least 2. (int)
  • windowing - Windowing type. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)
  • direction - FFT direction. Expected values: TRANSFORM_DIRECTION_BACKWARD, TRANSFORM_DIRECTION_FORWARD. (TransformDirection)
  • interpolation - Interpolation type. Ignored since 2.8 as no resampling is performed. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • preserverms - True to preserve RMS while windowing. (bool)
  • level - 0 to perform no leveling, 1 to subtract mean value, 2 to subtract plane (the number can be interpreted as the first polynomial degree to keep, but only the enumerated three values are available). (int)

fft2d_raw(iin, rout, iout, direction)

source code 

Calculates 2D Fast Fourier Transform of a data field.

No leveling, windowing nor scaling is performed.

The normalisation of FFT is symmetrical, so transformations in both directions are unitary.

Since 2.8 the dimensions need not be from the set of sizes returned by gwy_fft_find_nice_size().

Lateral dimensions, offsets and units are unchanged. See DataField.fft_postprocess() for that.

Since 2.53 iout can be None for complex-to-real transforms. Note that this means Hermitean symmetry of the input data is assumed, i.e. about half of the input is ignored. If you want to extract the real part of a complex transform, you must pass a non-None iout.

Parameters:
  • iin - Imaginary input data field. It can be None for real-to-complex transform. (DataField)
  • rout - Real output data field, it will be resized to rin size. (DataField)
  • iout - Imaginary output data field, it will be resized to rin size. (DataField)
  • direction - FFT direction. It should be TRANSFORM_DIRECTION_FORWARD for real-to-complex transforms and TRANSFORM_DIRECTION_BACKWARD for complex-to-real transforms. Expected values: TRANSFORM_DIRECTION_BACKWARD, TRANSFORM_DIRECTION_FORWARD. (TransformDirection)

Since: 2.1

fft2d_humanize()

source code 

Rearranges 2D FFT output to a human-friendly form.

Top-left, top-right, bottom-left and bottom-right sub-rectangles are swapped to obtain a humanized 2D FFT output with (0,0) in the centre.

More precisely, for even field dimensions the equally-sized blocks starting with the Nyquist frequency and with the zero frequency (constant component) will exchange places. For odd field dimensions, the block containing the zero frequency is one item larger and the constant component will actually end up in the exact centre.

Also note if both dimensions are even, this function is involutory and identical to DataField.fft2d_dehumanize(). However, if any dimension is odd, DataField.fft2d_humanize() and DataField.fft2d_dehumanize() are different, therefore they must be paired properly.

fft2d_dehumanize()

source code 

Rearranges 2D FFT output back from the human-friendly form.

Top-left, top-right, bottom-left and bottom-right sub-rectangles are swapped to reshuffle a humanized 2D FFT output back into the natural positions.

See DataField.fft2d_humanize() for discussion.

Since: 2.8

fft_postprocess(humanize)

source code 

Updates units, dimensions and offsets for a 2D FFT-processed field.

The field is expected to have dimensions and units of the original direct-space data. The lateral units and resolutions are updated to correspond to its Fourier transform.

The real dimensions are set for spatial frequencies, not wavevectors. For wavevector lateral coordinates, multiply all real dimensions and offsets by 2*glib.PI.

If humanize is True DataField.fft2d_humanize() is applied to the field data and the lateral offsets are set accordingly. Otherwise the offsets are cleared.

Value units are kept intact.

Parameters:
  • humanize - True to rearrange data to have the frequency origin in the centre. (bool)

Since: 2.38
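
A minimal sketch combining DataField.fft2d() above with this post-processing step. DataField.duplicate() is an assumption (it conveniently gives output fields with the original dimensions and units, which this function expects); field is a hypothetical DataField.

  import gwy

  rout = field.duplicate()
  iout = field.duplicate()
  field.fft2d(None, rout, iout,
              gwy.WINDOWING_HANN, gwy.TRANSFORM_DIRECTION_FORWARD,
              gwy.INTERPOLATION_LINEAR, True, 1)
  for f in (rout, iout):
      f.fft_postprocess(True)                     # humanize, set frequency units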

fft_filter_1d(result_field, weights, orientation, interpolation)

source code 

Performs 1D FFT filtering of a data field.

Parameters:
  • result_field - A data field to store the result to. It will be resampled to data_field's size. (DataField)
  • weights - Filter weights for the lower half of the spectrum (the other half is symmetric). Its size can be arbitrary, it will be interpolated. (DataLine)
  • orientation - Filter direction. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • interpolation - The interpolation to use for resampling. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)

fft_window(windowing)

source code 

Performs two-dimensional windowing of a data field in preparation for 2D FFT.

The same windowing function is used row-wise and column-wise.

Parameters:
  • windowing - Windowing type to use. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)

Since: 2.62

fft_window_1d(orientation, windowing)

source code 

Performs row-wise or column-wise windowing of a data field in preparation for 1D FFT.

Parameters:
  • orientation - Windowing orientation (the same as corresponding FFT orientation). Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • windowing - Windowing type to use. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)

Since: 2.62

cwt(interpolation, scale, wtype)

source code 

Computes a continuous wavelet transform (CWT) at given scale and using given wavelet.

Parameters:
  • interpolation - Interpolation type. Ignored since 2.8 as no resampling is performed. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • scale - Wavelet scale. (float)
  • wtype - Wavelet type. Expected values: 2DCWT_GAUSS, 2DCWT_HAT. (2DCWTWaveletType)

area_fit_plane(mask, col, row, width, height)

source code 

Fits a plane through a rectangular part of a data field.

The coefficients can be used for plane leveling using the same relation as in DataField.fit_plane(), counting indices from area top left corner.

Parameters:
  • mask - Mask of values to take values into account, or None for full data_field. Values equal to 0.0 and below cause corresponding data_field samples to be ignored, values equal to 1.0 and above cause inclusion of corresponding data_field samples. The behaviour for values inside (0.0, 1.0) is undefined (it may be specified in the future). (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
Tuple consisting of 3 values (pa, pbx, pby). ((float), (float), (float))

fit_plane()

source code 

Fits a plane through a data field.

The coefficients can be used for plane leveling using relation data[i] := data[i] - (pa + pby*i + pbx*j);

Returns:
Tuple consisting of 3 values (pa, pbx, pby). ((float), (float), (float))
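
A minimal sketch of mean-plane levelling using these coefficients together with DataField.plane_level() below; field is a hypothetical DataField.

  pa, pbx, pby = field.fit_plane()
  field.plane_level(pa, pbx, pby)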

fit_facet_plane(mfield, masking)

source code 

Calculates the inclination of a plane close to the dominant plane in a data field.

The dominant plane is determined by taking into account larger local slopes with exponentially smaller weight.

This is the basis of so-called facet levelling algorithm. Usually, the plane found by this method is subtracted using DataField.plane_level() and the entire process is repeated until it converges. A convergence criterion may be sufficiently small values of the x and y plane coefficients. Note that since DataField.plane_level() uses pixel-based lateral coordinates, the coefficients must be divided by DataField.get_dx(data_field) and DataField.get_dy(data_field) to obtain physical plane coefficients.

Parameters:
  • mfield - Mask specifying which values to take into account/exclude, or None. (DataField)
  • masking - Masking mode to use. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
Returns:
Tuple consisting of 4 values (value, pa, pbx, pby). ((bool), (float), (float), (float))

Since: 2.37
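
A minimal sketch of the facet-levelling loop described above. The iteration cap and the 1e-9 convergence limit on the physical coefficients are illustrative; masking is switched off and field is a hypothetical DataField.

  import gwy

  for _ in range(100):
      ok, pa, pbx, pby = field.fit_facet_plane(None, gwy.MASK_IGNORE)
      if not ok:
          break
      field.plane_level(pa, pbx, pby)
      # physical plane coefficients, see the note above
      if abs(pbx/field.get_dx()) < 1e-9 and abs(pby/field.get_dy()) < 1e-9:
          break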

plane_level(a, bx, by)

source code 

Subtracts plane from a data field.

See DataField.fit_plane() for details.

Parameters:
  • a - Constant coefficient. (float)
  • bx - X plane coefficient. (float)
  • by - Y plane coefficient. (float)

plane_rotate(xangle, yangle, interpolation)

source code 

Performs rotation of plane along x and y axis.

Parameters:
  • xangle - Rotation angle in x direction (rotation along y axis, in radians). (float)
  • yangle - Rotation angle in y direction (rotation along x axis, in radians). (float)
  • interpolation - Interpolation type (can be only of two-point type). Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)

fit_lines(col, row, width, height, degree, exclude, orientation)

source code 

Independently levels profiles on each row/column in a data field.

Lines that have no intersection with the area selected by col, row, width, height are always leveled as a whole. Lines that intersect the selected area are leveled using polynomial coefficients computed only from data inside the area (or outside it, for exclude = True).

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • degree - Fitted polynomial degree. (int)
  • exclude - If True, data outside the selected area are used for the polynomial coefficient computation instead of data inside it. (bool)
  • orientation - Line orientation. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)

area_local_plane_quantity(size, col, row, width, height, type, result)

source code 

Convenience function to get just one quantity from DataField.area_fit_local_planes().

Parameters:
  • size - Neighbourhood size. (int)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • type - The type of requested quantity. Expected values: PLANE_FIT_A, PLANE_FIT_BX, PLANE_FIT_BY, PLANE_FIT_ANGLE, PLANE_FIT_SLOPE, PLANE_FIT_S0, PLANE_FIT_S0_REDUCED. (PlaneFitQuantity)
  • result - A data field to store result to, or None to allocate a new one. (DataField)
Returns:
result if it isn't None, otherwise a newly allocated data field. (DataField)

local_plane_quantity(size, type, result)

source code 

Convenience function to get just one quantity from DataField.fit_local_planes().

Parameters:
  • size - Neighbourhood size. (int)
  • type - The type of requested quantity. Expected values: PLANE_FIT_A, PLANE_FIT_BX, PLANE_FIT_BY, PLANE_FIT_ANGLE, PLANE_FIT_SLOPE, PLANE_FIT_S0, PLANE_FIT_S0_REDUCED. (PlaneFitQuantity)
  • result - A data field to store result to, or None to allocate a new one. (DataField)
Returns:
result if it isn't None, otherwise a newly allocated data field. (DataField)

mfm_perpendicular_stray_field(out, height, thickness, sigma, walls, wall_delta)

source code 

Calculates stray field for perpendicular media, based on a mask showing the magnetisation orientation.

Parameters:
  • out - Target data field to put the result to. It will be resized to match mfield. (DataField)
  • height - Height above the surface. (float)
  • thickness - Film thickness. (float)
  • sigma - Magnetic charge. (float)
  • walls - Include domain walls. (bool)
  • wall_delta - Domain wall thickness (float)

Since: 2.51

mfm_perpendicular_stray_field_angle_correction(angle, orientation)

source code 

Performs correction of magnetic data for cantilever tilt.

Parameters:
  • angle - Cantilever tilt angle. (float)
  • orientation - Cantilever orientation with respect of the data. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)

Since: 2.54

mfm_perpendicular_medium_force(fz, type, mtip, bx, by, length)

source code 

Calculates force as evaluated from z-component of the magnetic field for a given probe type.

Parameters:
  • fz - Target data field to put the result to. It will be resized to match hz. (DataField)
  • type - Probe type. Expected values: MFM_PROBE_CHARGE, MFM_PROBE_BAR. (MFMProbeType)
  • mtip - Probe magnetic moment. (float)
  • bx - x size for parallelepiped probe. (float)
  • by - y size for parallelepiped probe. (float)
  • length - Length (z size) for parallelepiped probe. (float)

Since: 2.51

mfm_shift_z(out, zdiff)

source code 

Shifts magnetic field to a different lift height above the surface.

Positive zdiff means away from the measured surface and blurring the data. Negative zdiff means shifting towards (or within) the measured surface and sharpening the data. For negative zdiff the result grows exponentially and is generally not very useful.

Parameters:
  • out - Target data field to put the result to. (DataField)
  • zdiff - The shift distance in physical units. (float)

Since: 2.51

mfm_find_shift_z(shifted, zdiffmin, zdiffmax)

source code 

Estimates the height difference between two magnetic field images.

See DataField.mfm_shift_z() for the sign convention. It is generally only meaningful to estimate the shift when shifted was measured at a larger lift height than dfield.

Parameters:
  • shifted - Data field containing magnetic field component measured at a different lift height. (DataField)
  • zdiffmin - Start of shift scan range. (float)
  • zdiffmax - End of shift scan range. (float)
Returns:
The estimated shift between shifted and dfield. (float)

Since: 2.51

mfm_parallel_medium(height, size_a, size_b, size_c, magnetisation, thickness, component)

source code 

Calculates the magnetic field or its derivatives above a simple medium consisting of stripes with left and right magnetisation direction. Results are added to the data field, so it should be cleared beforehand if the function is run only once.

Parameters:
  • height - Height above surface. (float)
  • size_a - Left direction oriented area width. (float)
  • size_b - Right direction oriented area width. (float)
  • size_c - Gap size. (float)
  • magnetisation - Remanent magnetisation. (float)
  • thickness - Film thickness. (float)
  • component - Component to output. Expected values: MFM_COMPONENT_HX, MFM_COMPONENT_HY, MFM_COMPONENT_HZ, MFM_COMPONENT_DHZ_DZ, MFM_COMPONENT_D2HZ_DZ2. (MFMComponentType)

Since: 2.51

mfm_current_line(height, width, position, current, component)

source code 

Calculates the magnetic field or its derivatives above a flat current line (stripe). Results are added to the data field, so it should be cleared beforehand if the function is run only once.

Parameters:
  • height - Height above surface. (float)
  • width - Current line width. (float)
  • position - Current line x position in the resulting array. (float)
  • current - Current passing through the line. (float)
  • component - Component to output. Expected values: MFM_COMPONENT_HX, MFM_COMPONENT_HY, MFM_COMPONENT_HZ, MFM_COMPONENT_DHZ_DZ, MFM_COMPONENT_D2HZ_DZ2. (MFMComponentType)

Since: 2.51

get_max()

source code 

Finds the maximum value of a data field.

This quantity is cached.

Returns:
The maximum value. (float)

get_min()

source code 

Finds the minimum value of a data field.

This quantity is cached.

Returns:
The minimum value. (float)

get_min_max()

source code 

Finds minimum and maximum values of a data field.

Returns:
Tuple consisting of 2 values (min, max). ((float), (float))

get_avg()

source code 

Computes average value of a data field.

This quantity is cached.

Returns:
The average value. (float)

get_rms()

source code 

Computes root mean square value of a data field.

The root mean square value is calculated with respect to the mean value. See DataField.get_mean_square() for a similar function which does not subtract the mean value.

This quantity is cached.

Returns:
The root mean square value. (float)

get_mean_square()

source code 

Computes mean square value of a data field.

See DataField.area_get_mean_square() for remarks.

Returns:
The mean square value. (float)

Since: 2.52

get_sum()

source code 

Sums all values in a data field.

This quantity is cached.

Returns:
The sum of all values. (float)

get_median()

source code 

Computes median value of a data field.

This quantity is cached.

Returns:
The median value. (float)

get_surface_area()

source code 

Computes surface area of a data field.

This quantity is cached.

Returns:
The surface area. (float)

get_surface_slope()

source code 

Computes root mean square surface slope (Sdq) of a data field.

Returns:
The root mean square surface slope. (float)

Since: 2.58

get_variation()

source code 

Computes the total variation of a data field.

See DataField.area_get_variation() for the definition.

This quantity is cached.

Returns:
The variation. (float)

Since: 2.38

get_entropy()

source code 

Computes the entropy of a data field.

See DataField.area_get_entropy() for the definition.

This quantity is cached.

Returns:
The value distribution entropy. (float)

Since: 2.42

get_entropy_2d(yfield)

source code 

Computes the entropy of a two-dimensional point cloud.

Each pair of corresponding xfield and yfield pixels is assumed to represent the coordinates (x,y) of a point in the plane. Hence the two fields must have the same dimensions.

Parameters:
  • yfield - A data field containing the y-coordinates. (DataField)
Returns:
The two-dimensional distribution entropy. (float)

Since: 2.44
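
A typical use is estimating the entropy of a slope distribution by feeding x- and y-derivative fields as the point coordinates. A hedged sketch, assuming xder and yder are DataFields of identical dimensions obtained elsewhere:

    s2d = xder.get_entropy_2d(yder)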

area_get_max(mask, col, row, width, height)

source code 

Finds the maximum value in a rectangular part of a data field.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The maximum value. When there are no samples to calculate the maximum of, -glib.MAXDOUBLE is returned. (float)
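
For instance, the maximum of a centred quarter-size window can be queried as below (col/row/width/height are pixel coordinates; dfield is any DataField):

    xres, yres = dfield.get_xres(), dfield.get_yres()
    mx = dfield.area_get_max(None, xres//4, yres//4, xres//2, yres//2)
    # -glib.MAXDOUBLE signals that no samples were available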

area_get_min(mask, col, row, width, height)

source code 

Finds the minimum value in a rectangular part of a data field.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The minimum value. When there are no samples to calculate the minimum of, glib.MAXDOUBLE is returned. (float)

area_get_min_max(mask, col, row, width, height)

source code 

Finds minimum and maximum values in a rectangular part of a data field.

This function is equivalent to calling DataField.area_get_min_max_mask() with masking mode MASK_INCLUDE.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
Tuple consisting of 2 values (min, max). ((float), (float))

area_get_min_max_mask(mask, mode, col, row, width, height)

source code 

Finds minimum and maximum values in a rectangular part of a data field.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
Tuple consisting of 2 values (min, max). ((float), (float))

Since: 2.18
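
A masked sketch, assuming mask is a same-sized DataField whose non-zero pixels mark the region of interest and that the enum constants listed above are exposed in the gwy module:

    import gwy
    mn, mx = dfield.area_get_min_max_mask(mask, gwy.MASK_INCLUDE,
                                          0, 0, dfield.get_xres(), dfield.get_yres())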

area_get_avg(mask, col, row, width, height)

source code 

Computes average value of a rectangular part of a data field.

This function is equivalent to calling DataField.area_get_avg_mask() with masking mode MASK_INCLUDE.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The average value. (float)

area_get_avg_mask(mask, mode, col, row, width, height)

source code 

Computes average value of a rectangular part of a data field.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The average value. (float)

Since: 2.18

area_get_rms(mask, col, row, width, height)

source code 

Computes root mean square value of a rectangular part of a data field.

The root mean square value is calculated with respect to the mean value.

This function is equivalent to calling DataField.area_get_rms_mask() with masking mode MASK_INCLUDE.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The root mean square value. (float)

area_get_rms_mask(mask, mode, col, row, width, height)

source code 

Computes root mean square value of deviations of a rectangular part of a data field.

The root mean square value is calculated with respect to the mean value.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The root mean square value of deviations from the mean value. (float)

Since: 2.18

area_get_grainwise_rms(mask, mode, col, row, width, height)

source code 

Computes grain-wise root mean square value of deviations of a rectangular part of a data field.

Grain-wise means that the mean value is determined for each grain (i.e. contiguous part of the mask or inverted mask) separately and the deviations are calculated from these mean values.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The root mean square value of deviations from the mean value. (float)

Since: 2.29

area_get_sum(mask, col, row, width, height)

source code 

Sums values of a rectangular part of a data field.

This function is equivalent to calling DataField.area_get_sum_mask() with masking mode MASK_INCLUDE.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The sum of all values inside area. (float)

area_get_sum_mask(mask, mode, col, row, width, height)

source code 

Sums values of a rectangular part of a data field.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The sum of all values inside area. (float)

Since: 2.18

area_get_median(mask, col, row, width, height)

source code 

Computes median value of a data field area.

This function is equivalent to calling DataField.area_get_median_mask() with masking mode MASK_INCLUDE.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The median value. (float)

area_get_median_mask(mask, mode, col, row, width, height)

source code 

Computes median value of a data field area.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The median value. (float)

Since: 2.18

area_get_surface_area(mask, col, row, width, height)

source code 

Computes surface area of a rectangular part of a data field.

This function is equivalent to calling DataField.area_get_surface_area_mask() with masking mode MASK_INCLUDE.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The surface area. (float)

area_get_surface_area_mask(mask, mode, col, row, width, height)

source code 

Computes surface area of a rectangular part of a data field.

This quantity makes sense only if the lateral dimensions and values of data_field are the same physical quantities.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The surface area. (float)

Since: 2.18

area_get_surface_slope_mask(mask, mode, col, row, width, height)

source code 

Computes root mean square surface slope (Sdq) of a rectangular part of a data field.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The root mean square surface slope. (float)

Since: 2.58

area_get_mean_square(mask, mode, col, row, width, height)

source code 

Computes mean square value of a rectangular part of a data field.

Unlike DataField.get_rms(), this function does not subtract the mean value beforehand. It is therefore useful for summing the squared values of data fields whose zero level may be set differently, for instance when the field contains a distribution.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The mean square value. (float)

Since: 2.52

area_get_entropy_at_scales(target_line, mask, mode, col, row, width, height, maxdiv)

source code 

Calculates estimates of value distribution entropy at various scales.

Parameters:
  • target_line - A data line to store the result to. It will be resampled to maxdiv+1 items. (DataLine)
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • maxdiv - Maximum number of divisions of the value range. Pass zero to choose it automatically. (int)
Returns:
The best estimate, as DataField.area_get_entropy(). (float)

Since: 2.44

get_entropy_2d_at_scales(yfield, target_line, maxdiv)

source code 

Calculates estimates of entropy of two-dimensional point cloud at various scales.

Parameters:
  • yfield - A data field containing the y-coordinates. (DataField)
  • target_line - A data line to store the result to. It will be resampled to maxdiv+1 items. (DataLine)
  • maxdiv - Maximum number of divisions of the value range. Pass zero to choose it automatically. (int)
Returns:
The best estimate, as DataField.get_entropy_2d(). (float)

Since: 2.44

area_get_variation(mask, mode, col, row, width, height)

source code 

Computes the total variation of a rectangular part of a data field.

The total variation is estimated as the integral of the absolute value of local gradient.

This quantity has the somewhat odd units of value unit times lateral unit. It can be envisioned as follows. If the surface has just two height levels (upper and lower planes) then the quantity is the length of the boundary between the upper and lower part, multiplied by the step height. If the surface is piece-wise constant, then the variation is the step height integrated along the boundaries between the constant parts. Therefore, for non-fractal surfaces it scales with the linear dimension of the image, not with its area, despite being an area integral.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The variation. (float)

Since: 2.38

area_get_entropy(mask, mode, col, row, width, height)

source code 

Estimates the entropy of field data distribution.

The estimate is calculated as S = ln(n Δ) − 1/n ∑ n_i ln(n_i), where n is the number of pixels considered, Δ the bin size and n_i the count in the i-th bin. If S is plotted as a function of the bin size Δ, it is, generally, a growing function with a plateau for ‘reasonable’ bin sizes. The estimate is taken at the plateau. If no plateau is found, which means the distribution is effectively a sum of δ-functions, -glib.MAXDOUBLE is returned.

It should be noted that this estimate may be biased.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The estimated entropy of the data values. The entropy of no data or a single value is returned as -glib.MAXDOUBLE. (float)

Since: 2.42

area_get_volume(basis, mask, col, row, width, height)

source code 

Computes volume of a rectangular part of a data field.

Parameters:
  • basis - The basis or background for volume calculation if not None. The height of each vertex is then the difference between data_field value and basis value. Value None is the same as passing all zeroes for the basis. (DataField)
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
The volume. (float)

Since: 2.3
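
For example, the volume of the whole field above a flat zero basis is obtained by passing None for both basis and mask; dividing it by the projected area gives roughly the mean height:

    vol = dfield.area_get_volume(None, None, 0, 0,
                                 dfield.get_xres(), dfield.get_yres())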

get_autorange()

source code 

Computes data field value range with outliers cut-off.

The purpose of this function is to find a range suitable for false color mapping. The precise method of calculation is unspecified and may be subject to change.

However, it is guaranteed that minimum <= from <= to <= maximum.

This quantity is cached.

Returns:
Tuple consisting of 2 values (from_, to). ((float), (float))
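
A common pattern is to use the cut-off range for display while keeping the full range for statistics (a sketch; the actual false color mapping is done elsewhere):

    lo, hi = dfield.get_autorange()
    full_lo, full_hi = dfield.get_min_max()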

get_stats()

source code 

Computes basic statistical quantities of a data field.

Note the kurtosis returned by this function is the excess kurtosis, which is zero for the Gaussian distribution (not 3).

Returns:
Tuple consisting of 5 values (avg, ra, rms, skew, kurtosis). ((float), (float), (float), (float), (float))
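
All five quantities come back as one tuple, so a report line is a single call (dfield is any DataField):

    avg, ra, rms, skew, kurtosis = dfield.get_stats()
    print("avg=%g Ra=%g rms=%g skew=%g excess kurtosis=%g"
          % (avg, ra, rms, skew, kurtosis))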

area_get_stats(mask, col, row, width, height)

source code 

Computes basic statistical quantities of a rectangular part of a data field.

This function is equivalent to calling DataField.area_get_stats_mask() with masking mode MASK_INCLUDE.

Note the kurtosis returned by this function is the excess kurtosis, which is zero for the Gaussian distribution (not 3).

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
Tuple consisting of 5 values (avg, ra, rms, skew, kurtosis). ((float), (float), (float), (float), (float))

area_get_stats_mask(mask, mode, col, row, width, height)

source code 

Computes basic statistical quantities of a rectangular part of a data field.

Note the kurtosis returned by this function is the excess kurtosis, which is zero for the Gaussian distribution (not 3).

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • mode - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
Tuple consisting of 5 values (avg, ra, rms, skew, kurtosis). ((float), (float), (float), (float), (float))

Since: 2.18

area_count_in_range(mask, col, row, width, height, below, above)

source code 

Counts data samples in given range.

No assertion is made about the values of above and below; in other words, above may be larger than below. To count samples in an open interval instead of a closed interval, exchange below and above and then subtract nabove and nbelow from width*height to get the complementary counts.

With this trick the common task of counting positive values can be realized (a pygwy rendering of the C example from the original documentation):

    nbelow, nabove = data_field.area_count_in_range(None, col, row, width, height, 0.0, 0.0)
    count = width*height - nbelow   # number of strictly positive samples

Parameters:
  • mask - Mask specifying which values to take into account, or None. (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • below - Upper bound to compare data to. The number of samples less than or equal to below is stored in nbelow. (float)
  • above - Lower bound to compare data to. The number of samples greater than or equal to above is stored in nabove. (float)
Returns:
Tuple consisting of 2 values (nbelow, nabove). ((int), (int))

area_dh(mask, target_line, col, row, width, height, nstats)

source code 

Calculates distribution of heights in a rectangular part of data field.

Parameters:
  • mask - Mask specifying which values to take into account, or None. (DataField)
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

dh(target_line, nstats)

source code 

Calculates distribution of heights in a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)
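
A minimal height-distribution sketch; the DataLine constructor form (res, real, nullme) is an assumption, and the line is resampled by the call anyway:

    import gwy
    hist = gwy.DataLine(1, 1.0, False)   # placeholder, resampled by dh()
    dfield.dh(hist, 0)                   # nonpositive nstats: automatic resolution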

area_cdh(mask, target_line, col, row, width, height, nstats)

source code 

Calculates cumulative distribution of heights in a rectangular part of data field.

Parameters:
  • mask - Mask specifying which values to take into account, or None. (DataField)
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

cdh(target_line, nstats)

source code 

Calculates cumulative distribution of heights in a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

area_da(target_line, col, row, width, height, orientation, nstats)

source code 

Calculates distribution of slopes in a rectangular part of data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • orientation - Orientation to compute the slope distribution in. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

area_da_mask(mask, target_line, col, row, width, height, orientation, nstats)

source code 

Calculates distribution of slopes in a rectangular part of data field, with masking.

Parameters:
  • mask - Mask specifying which values to take into account, or None. (DataField)
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • orientation - Orientation to compute the slope distribution in. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

Since: 2.49

da(target_line, orientation, nstats)

source code 

Calculates distribution of slopes in a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • orientation - Orientation to compute the slope distribution in. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

area_cda(target_line, col, row, width, height, orientation, nstats)

source code 

Calculates cumulative distribution of slopes in a rectangular part of data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • orientation - Orientation to compute the slope distribution in. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

area_cda_mask(mask, target_line, col, row, width, height, orientation, nstats)

source code 

Calculates cumulative distribution of slopes in a rectangular part of data field, with masking.

Parameters:
  • mask - Mask specifying which values to take into account, or None. (DataField)
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • orientation - Orientation to compute the slope distribution in. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

Since: 2.49

cda(target_line, orientation, nstats)

source code 

Calculates cumulative distribution of slopes in a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • orientation - Orientation to compute the slope distribution in. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

area_acf(target_line, col, row, width, height, orientation, interpolation, nstats)

source code 

Calculates one-dimensional autocorrelation function of a rectangular part of a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • orientation - Orientation of lines (ACF is simply averaged over the other orientation). Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • interpolation - Interpolation to use when nstats is given and requires resampling. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • nstats - The number of samples to take on the distribution function. If nonpositive, width (height) is used. (int)

acf(target_line, orientation, interpolation, nstats)

source code 

Calculates one-dimensional autocorrelation function of a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • orientation - Orientation of lines (ACF is simply averaged over the other orientation). Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • interpolation - Interpolation to use when nstats is given and requires resampling. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • nstats - The number of samples to take on the distribution function. If nonpositive, data field width (height) is used. (int)

area_row_acf(mask, masking, col, row, width, height, level, weights)

source code 

Calculates the row-wise autocorrelation function (ACF) of a field.

The calculated ACF has the natural number of points, i.e. width.

Masking is performed by omitting all terms that contain excluded pixels. Since different rows contain different numbers of pixels, the resulting ACF values are calculated as weighted sums where the weight of each row's contribution is proportional to the number of contributing terms. In other words, the weighting is fair: each contributing pixel has the same influence on the result.

Only level values 0 (no levelling) and 1 (subtract the mean value) used to be available. For SPM data, you usually wish to pass 1. Since 2.56 you can also pass 2 for mean line subtraction.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • masking - Masking mode to use (has any effect only with non-None mask). Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • level - The first polynomial degree to keep in the rows, lower degrees than level are subtracted. (int)
  • weights - Line to store the denominators to (or None). It will be resized to match the returned line. The denominators are integers equal to the number of terms that contributed to each value. They are suitable as fitting weights if the ACF is fitted. (DataLine)
Returns:
A new one-dimensional data line with the ACF. (DataLine)
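
A hedged whole-field example with fitting weights; level 1 subtracts the row means, and the DataLine constructor form is an assumption (the line is resized by the call):

    import gwy
    weights = gwy.DataLine(1, 1.0, False)
    acf_line = dfield.area_row_acf(None, gwy.MASK_IGNORE,
                                   0, 0, dfield.get_xres(), dfield.get_yres(),
                                   1, weights)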

area_hhcf(target_line, col, row, width, height, orientation, interpolation, nstats)

source code 

Calculates one-dimensional height-height correlation function of a rectangular part of a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • orientation - Orientation of lines (HHCF is simply averaged over the other orientation). Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • interpolation - Interpolation to use when nstats is given and requires resampling. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • nstats - The number of samples to take on the distribution function. If nonpositive, width (height) is used. (int)

hhcf(target_line, orientation, interpolation, nstats)

source code 

Calculates one-dimensional height-height correlation function of a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • orientation - Orientation of lines (HHCF is simply averaged over the other orientation). Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • interpolation - Interpolation to use when nstats is given and requires resampling. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • nstats - The number of samples to take on the distribution function. If nonpositive, data field width (height) is used. (int)

area_row_hhcf(mask, masking, col, row, width, height, level, weights)

source code 

Calculates the row-wise height-height correlation function (HHCF) of a rectangular part of a field.

The calculated HHCF has the natural number of points, i.e. width.

Masking is performed by omitting all terms that contain excluded pixels. Since different rows contain different numbers of pixels, the resulting HHCF values are calculated as weighted sums where the weight of each row's contribution is proportional to the number of contributing terms. In other words, the weighting is fair: each contributing pixel has the same influence on the result.

Only level values 0 (no levelling) and 1 (subtract the mean value) used to be available. There is no difference between them for HHCF. Since 2.56 you can also pass 2 for mean line subtraction.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • masking - Masking mode to use (has any effect only with non-None mask). Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • level - The first polynomial degree to keep in the rows, lower degrees than level are subtracted. (int)
  • weights - Line to store the denominators to (or None). It will be resized to match the returned line. The denominators are integers equal to the number of terms that contributed to each value. They are suitable as fitting weights if the HHCF is fitted. (DataLine)
Returns:
A new one-dimensional data line with the HHCF. (DataLine)

area_psdf(target_line, col, row, width, height, orientation, interpolation, windowing, nstats)

source code 

Calculates one-dimensional power spectrum density function of a rectangular part of a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • orientation - Orientation of lines (PSDF is simply averaged over the other orientation). Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • interpolation - Interpolation to use when nstats is given and requires resampling. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • windowing - Windowing type to use. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)
  • nstats - The number of samples to take on the distribution function. If nonpositive, data field width (height) is used. (int)

psdf(target_line, orientation, interpolation, windowing, nstats)

source code 

Calculates one-dimensional power spectrum density function of a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • orientation - Orientation of lines (PSDF is simply averaged over the other orientation). Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)
  • interpolation - Interpolation to use when nstats is given and requires resampling. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • windowing - Windowing type to use. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)
  • nstats - The number of samples to take on the distribution function. If nonpositive, data field width (height) is used. (int)
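
A row-averaged PSDF sketch with Hann windowing; the enum constants listed above are assumed to be exposed in the gwy module and the DataLine constructor form is an assumption:

    import gwy
    psd = gwy.DataLine(1, 1.0, False)    # placeholder, resampled by the call
    dfield.psdf(psd, gwy.ORIENTATION_HORIZONTAL, gwy.INTERPOLATION_LINEAR,
                gwy.WINDOWING_HANN, 0)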

area_row_psdf(mask, masking, col, row, width, height, windowing, level)

source code 

Calculates the row-wise power spectrum density function (PSDF) of a rectangular part of a field.

The calculated PSDF has the natural number of points that follows from DFT, i.e. width/2+1.

The reduction of the total energy by windowing is compensated by multiplying the PSDF to make its sum of squares equal to the input data sum of squares.

Masking is performed by omitting all terms that contain excluded pixels. Since different rows contain different numbers of pixels, the resulting PSDF is calculated as a weighted sum where each row's weight is proportional to the number of contributing pixels. In other words, the weighting is fair: each contributing pixel has the same influence on the result.

Only level values 0 (no levelling) and 1 (subtract the mean value) used to be available. For SPM data, you usually wish to pass 1. Since 2.56 you can also pass 2 for mean line subtraction.

Do not assume the PSDF values are all positive when masking is in effect. The PSDF should still have the correct integral, but it will be contaminated with noise, both positive and negative.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • masking - Masking mode to use (has any effect only with non-None mask). Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • windowing - Windowing type to use. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)
  • level - The first polynomial degree to keep in the rows; lower degrees than level are subtracted. (int)
Returns:
A new one-dimensional data line with the PSDF. (DataLine)

area_rpsdf(target_line, col, row, width, height, interpolation, windowing, nstats)

source code 

Calculates radial power spectrum density function of a rectangular part of a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • interpolation - Interpolation to use when nstats is given and requires resampling. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • windowing - Windowing type to use. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)
  • nstats - The number of samples to take on the distribution function. If nonpositive, data field width (height) is used. (int)

Since: 2.7

rpsdf(target_line, interpolation, windowing, nstats)

source code 

Calculates radial power spectrum density function of a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • interpolation - Interpolation to use when nstats is given and requires resampling. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • windowing - Windowing type to use. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)
  • nstats - The number of samples to take on the distribution function. If nonpositive, data field width (height) is used. (int)

Since: 2.7

area_row_asg(mask, masking, col, row, width, height, level)

source code 

Calculates the row-wise area scale graph (ASG) of a rectangular part of a field.

The calculated ASG has the natural number of points, i.e. width-1.

The ASG represents the apparent area excess (ratio of surface and projected area minus one) observed at a given length scale. The quantity calculated by this function serves a similar purpose to the ASME B46.1 area scale graph but is defined differently, based on the HHCF. See DataField.area_row_hhcf() for details of its calculation.

Only level values 0 (no levelling) and 1 (subtract the mean value) used to be available. There is no difference between them for HHCF. Since 2.56 you can also pass 2 for mean line subtraction.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • masking - Masking mode to use (has any effect only with non-None mask). Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • level - The first polynomial degree to keep in the rows, lower degrees than level are subtracted. (int)
Returns:
A new one-dimensional data line with the ASG. (DataLine)

area_2dacf(target_field, col, row, width, height, xrange, yrange)

source code 

Calculates two-dimensional autocorrelation function of a data field area.

The resulting data field has the correlation corresponding to (0,0) in the centre.

The maximum possible values of xrange and yrange are data_field width and height, respectively. However, as the values for longer distances are calculated from a smaller number of data points, they become increasingly bogus; therefore the default range is half of the size.

Parameters:
  • target_field - A data field to store the result to. It will be resampled to (2xrange-1)×(2yrange-1). (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • xrange - Horizontal correlation range. Non-positive value means the default range of half of data_field width will be used. (int)
  • yrange - Vertical correlation range. Non-positive value means the default range of half of data_field height will be used. (int)

Since: 2.7

area_2dacf_mask(target_field, mask, masking, col, row, width, height, xrange, yrange, weights)

source code 

Calculates two-dimensional autocorrelation function of a data field area.

The resulting data field has the correlation corresponding to (0,0) in the centre.

The maximum possible values of xrange and yrange are data_field width and height, respectively. However, as the values for longer distances are calculated from a smaller number of data points, they become increasingly bogus; therefore the default range is half of the size.

Parameters:
  • target_field - A data field to store the result to. It will be resampled to (2xrange-1)×(2yrange-1). (DataField)
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • masking - Masking mode to use (has any effect only with non-None mask). Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • xrange - Horizontal correlation range. Non-positive value means the default range of half of data_field width will be used. (int)
  • yrange - Vertical correlation range. Non-positive value means the default range of half of data_field height will be used. (int)
  • weights - Field to store the denominators to (or None). It will be resized like target_field. The denominators are integers equal to the number of terms that contributed to each value. They are suitable as fitting weights if the ACF is fitted. (DataField)

Since: 2.50

acf2d(target_field)

source code 

Calculates two-dimensional autocorrelation function of a data field.

See DataField.area_2dacf() for details. Parameters missing (not adjustable) in this function are set to their default values.

Parameters:
  • target_field - A data field to store the result to. (DataField)

Since: 2.7
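
A short sketch; the target field is resampled by the call, so any DataField will do (here created with the documented constructor):

    import gwy
    acf = gwy.DataField(1, 1, 1.0, 1.0, False)
    dfield.acf2d(acf)    # zero lag ends up in the centre of acf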

area_2dpsdf_mask(target_field, mask, masking, col, row, width, height, windowing, level)

source code 

Calculates two-dimensional power spectrum density function of a data field area.

The resulting data field has the spectrum density corresponding to zero frequency (0,0) in the centre.

Only level values 0 (no levelling) and 1 (subtract the mean value) used to be available. For SPM data, you usually wish to pass 1. Since 2.56 you can also pass 2 for mean plane subtraction.

The reduction of the total energy by windowing is compensated by multiplying the PSDF to make its sum of squares equal to the input data sum of squares.

Do not assume the PSDF values are all positive when masking is in effect. The PSDF should still have the correct integral, but it will be contaminated with noise, both positive and negative.

Parameters:
  • target_field - A data field to store the result to. It will be resampled to width×height. (DataField)
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • masking - Masking mode to use (has any effect only with non-None mask). Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • windowing - Windowing type to use. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)
  • level - The first polynomial degree to keep in the area; lower degrees than level are subtracted. (int)

Since: 2.51

psdf2d(target_field, windowing, level)

source code 

Calculates two-dimensional power spectrum density function of a data field.

See DataField.area_2dpsdf_mask() for details and discussion.

Parameters:
  • target_field - A data field to store the result to. It will be resampled to the same size as data_field. (DataField)
  • windowing - Windowing type to use. Expected values: WINDOWING_NONE, WINDOWING_HANN, WINDOWING_HAMMING, WINDOWING_BLACKMANN, WINDOWING_LANCZOS, WINDOWING_WELCH, WINDOWING_RECT, WINDOWING_NUTTALL, WINDOWING_FLAT_TOP, WINDOWING_KAISER25. (WindowingType)
  • level - The first polynomial degree to keep in the area; lower degrees than level are subtracted. Note only values 0, 1, and 2 are available at present. For SPM data, you usually wish to pass 1. (int)

Since: 2.51

area_racf(target_line, col, row, width, height, nstats)

source code 

Calculates radially averaged autocorrelation function of a rectangular part of a data field.

Parameters:
  • target_line - A data line to store the autocorrelation function to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • nstats - The number of samples to take on the autocorrelation function. If nonpositive, a suitable resolution is chosen automatically. (int)

Since: 2.22

racf(target_line, nstats)

source code 

Calculates radially averaged autocorrelation function of a data field.

Parameters:
  • target_line - A data line to store the autocorrelation function to. It will be resampled to requested width. (DataLine)
  • nstats - The number of samples to take on the autocorrelation function. If nonpositive, a suitable resolution is chosen automatically. (int)

Since: 2.22

area_minkowski_volume(target_line, col, row, width, height, nstats)

source code 

Calculates Minkowski volume functional of a rectangular part of a data field.

Volume functional is calculated as the number of values above each threshold value (‘white’ pixels) divided by the total number of samples in the area. It is equivalent to 1-CDH.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

minkowski_volume(target_line, nstats)

source code 

Calculates Minkowski volume functional of a data field.

See DataField.area_minkowski_volume() for details.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

area_minkowski_boundary(target_line, col, row, width, height, nstats)

source code 

Calculates Minkowski boundary functional of a rectangular part of a data field.

Boundary functional is calculated as the number of boundaries for each threshold value (the number of pixel sides where one of the neighbouring pixels is ‘white’ and the other ‘black’) divided by the total number of samples in the area.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

minkowski_boundary(target_line, nstats)

source code 

Calculates Minkowski boundary functional of a data field.

See DataField.area_minkowski_boundary() for details.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

area_minkowski_euler(target_line, col, row, width, height, nstats)

source code 

Calculates Minkowski connectivity functional (Euler characteristics) of a rectangular part of a data field.

Connectivity functional is calculated as the number of connected areas of pixels above threshold (‘white’) minus the number of connected areas of pixels below threshold (‘black’) for each threshold value, divided by the total number of samples in the area.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

minkowski_euler(target_line, nstats)

source code 

Calculates Minkowski connectivity functional (Euler characteristics) of a data field.

See DataField.area_minkowski_euler() for details.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to requested width. (DataLine)
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable resolution is determined automatically. (int)

area_get_dispersion(mask, masking, col, row, width, height)

source code 

Calculates the dispersion of a data field area, taking it as a distribution.

The function takes data_field as a distribution, finds the centre of mass in the area and then calculates the mean squared distance from this centre, weighted by data_field values. Normally data_field should contain only non-negative data.

The dispersion is measured in real coordinates, so horizontal and vertical pixel sizes play a role and the units are squared lateral units of data_field. Note, however, that xcenter and ycenter are returned in pixel coordinates since it is usually more convenient.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • masking - Masking mode to use (has any effect only with non-None mask). Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
Tuple consisting of 3 values (value, xcenter, ycenter). ((float), (float), (float))

Since: 2.52

get_dispersion()

source code 

Calculates the dispersion of a data field, taking it as a distribution.

See DataField.area_get_dispersion() for discussion.

Returns:
Tuple consisting of 3 values (value, xcenter, ycenter). ((float), (float), (float))

Since: 2.52
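
For a field containing a non-negative distribution (e.g. a radial PSDF image), the dispersion and its centre come back together; note the mixed units described above:

    disp, xc, yc = dfield.get_dispersion()
    # disp is in squared lateral units, (xc, yc) in pixel coordinates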

slope_distribution(derdist, kernel_size)

source code 

Computes angular slope distribution.

Parameters:
  • derdist - A data line to fill with angular slope distribution. Its resolution determines resolution of the distribution. (DataLine)
  • kernel_size - If positive, local plane fitting will be used for slope computation; if nonpositive, plain central derivations will be used. (int)

get_normal_coeffs(normalize1)

source code 

Computes average normal vector of a data field.

Parameters:
  • normalize1 - true to normalize the normal vector to 1, false to normalize the vector so that its z-component is 1. (bool)
Returns:
Tuple consisting of 3 values (nx, ny, nz). ((float), (float), (float))

area_get_normal_coeffs(col, row, width, height, normalize1)

source code 

Computes average normal vector of an area of a data field.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • normalize1 - true to normalize the normal vector to 1, false to normalize the vector so that its z-component is 1. (bool)
Returns:
Tuple consisting of 3 values (nx, ny, nz). ((float), (float), (float))

area_get_inclination(col, row, width, height)

source code 

Calculates the inclination of the image (polar and azimuth angle).

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
Returns:
Tuple consisting of 2 values (theta, phi). ((float), (float))

get_inclination()

source code 

Calculates the inclination of the image (polar and azimuth angle).

Returns:
Tuple consisting of 2 values (theta, phi). ((float), (float))

area_get_line_stats(mask, target_line, col, row, width, height, quantity, orientation)

source code 

Calculates a line quantity for each row or column in a data field area.

Use DataField.get_line_stats_mask() for full masking type options.

Parameters:
  • mask - Mask of values to take values into account, or None for full data_field. (DataField)
  • target_line - A data line to store the distribution to. It will be resampled to the number of rows (columns). (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • quantity - The line quantity to calculate for each row (column). Expected values: LINE_STAT_MEAN, LINE_STAT_MEDIAN, LINE_STAT_MINIMUM, LINE_STAT_MAXIMUM, LINE_STAT_RMS, LINE_STAT_LENGTH, LINE_STAT_SLOPE, LINE_STAT_TAN_BETA0, LINE_STAT_RA, LINE_STAT_RZ, LINE_STAT_RT, LINE_STAT_SKEW, LINE_STAT_KURTOSIS, LINE_STAT_RANGE, LINE_STAT_VARIATION, LINE_STAT_MINPOS, LINE_STAT_MAXPOS. (LineStatQuantity)
  • orientation - Line orientation. For ORIENTATION_HORIZONTAL each target_line point corresponds to a row of the area, for ORIENTATION_VERTICAL each target_line point corresponds to a column of the area. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)

Since: 2.2
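
A minimal sketch, assuming an existing DataField named field and the (resolution, real length, nullme) DataLine constructor; it computes the RMS of every row over the whole field (the initial DataLine size is irrelevant because the line is resampled):

    import gwy

    target = gwy.DataLine(1, 1.0, False)
    field.area_get_line_stats(None, target, 0, 0, field.get_xres(), field.get_yres(),
                              gwy.LINE_STAT_RMS, gwy.ORIENTATION_HORIZONTAL)
    row_rms = target.get_data()   # one value per row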

get_line_stats_mask(mask, masking, target_line, weights, col, row, width, height, quantity, orientation)

source code 

Calculates a line quantity for each row or column in a data field area.

Parameters:
  • mask - Mask specifying which values to take into account, or None for full data_field. (DataField)
  • masking - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • target_line - A data line to store the distribution to. It will be resampled to the number of rows (columns). (DataLine)
  • weights - A data line to store number of data points contributing to each value in target_line, or None. It is useful when masking is used to possibly exclude values calculated from too few data points. (DataLine)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • quantity - The line quantity to calculate for each row (column). Expected values: LINE_STAT_MEAN, LINE_STAT_MEDIAN, LINE_STAT_MINIMUM, LINE_STAT_MAXIMUM, LINE_STAT_RMS, LINE_STAT_LENGTH, LINE_STAT_SLOPE, LINE_STAT_TAN_BETA0, LINE_STAT_RA, LINE_STAT_RZ, LINE_STAT_RT, LINE_STAT_SKEW, LINE_STAT_KURTOSIS, LINE_STAT_RANGE, LINE_STAT_VARIATION, LINE_STAT_MINPOS, LINE_STAT_MAXPOS. (LineStatQuantity)
  • orientation - Line orientation. For ORIENTATION_HORIZONTAL each target_line point corresponds to a row of the area, for ORIENTATION_VERTICAL each target_line point corresponds to a column of the area. Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)

Since: 2.46

get_line_stats(target_line, quantity, orientation)

source code 

Calculates a line quantity for each row or column of a data field.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to data_field height (width). (DataLine)
  • quantity - The line quantity to calculate for each row (column). Expected values: LINE_STAT_MEAN, LINE_STAT_MEDIAN, LINE_STAT_MINIMUM, LINE_STAT_MAXIMUM, LINE_STAT_RMS, LINE_STAT_LENGTH, LINE_STAT_SLOPE, LINE_STAT_TAN_BETA0, LINE_STAT_RA, LINE_STAT_RZ, LINE_STAT_RT, LINE_STAT_SKEW, LINE_STAT_KURTOSIS, LINE_STAT_RANGE, LINE_STAT_VARIATION, LINE_STAT_MINPOS, LINE_STAT_MAXPOS. (LineStatQuantity)
  • orientation - Line orientation. See DataField.area_get_line_stats(). Expected values: ORIENTATION_HORIZONTAL, ORIENTATION_VERTICAL. (Orientation)

Since: 2.2

count_maxima()

source code 

Counts the number of regional maxima in a data field.

See DataField.mark_extrema() for the definition of a regional maximum.

Returns:
The number of regional maxima. (int)

Since: 2.38

count_minima()

source code 

Counts the number of regional minima in a data field.

See DataField.mark_extrema() for the definition of a regional minimum.

Returns:
The number of regional minima. (int)

Since: 2.38

psdf_to_angular_spectrum(nstats)

source code 

Transforms 2D power spectral density to an angular spectrum.

Parameters:
  • nstats - The number of samples to take on the distribution function. If nonpositive, a suitable number is chosen automatically. (int)
Returns:
A new one-dimensional data line with the angular spectrum. (DataLine)

Since: 2.56

angular_average(target_line, mask, masking, x, y, r, nstats)

source code 

Performs angular averaging of a part of a data field.

The result of such averaging is a radial profile, starting from the disc centre.

The function does not guarantee that target_line will have exactly nstats samples upon return. A smaller number of samples than requested may be calculated, for instance, if either the central or the outer part of the disc is excluded by masking.

Parameters:
  • target_line - A data line to store the distribution to. It will be resampled to nstats size. (DataLine)
  • mask - Mask of pixels to include in or exclude from the averaging, or None for full data_field. (DataField)
  • masking - Masking mode to use. See the introduction for description of masking modes. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • x - X-coordinate of the averaging disc origin, in real coordinates including offsets. (float)
  • y - Y-coordinate of the averaging disc origin, in real coordinates including offsets. (float)
  • r - Radius, in real coordinates. It determines the real length of the resulting line. (float)
  • nstats - The number of samples the resulting line should have. A non-positive value means the sampling will be determined automatically. (int)

Since: 2.42
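
A minimal sketch of extracting a radial profile around the field centre, assuming an existing DataField named field with zero offsets and the (resolution, real length, nullme) DataLine constructor:

    import gwy

    xc = 0.5*field.get_xreal()
    yc = 0.5*field.get_yreal()
    r = 0.25*min(field.get_xreal(), field.get_yreal())

    profile = gwy.DataLine(1, 1.0, False)   # resampled to nstats by the call
    field.angular_average(profile, None, gwy.MASK_IGNORE, xc, yc, r, 120)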

copy_units_to_surface(surface)

source code 

Sets lateral and value units of a surface to match a data field.

Parameters:
  • surface - A surface whose lateral and value units will be set. (Surface)

Since: 2.46

get_data()

source code 

Extracts the data of a data field.

The returned list contains a copy of the data. Changing its contents does not change the data field's data.

Returns:
List containing extracted data field data. (list)

set_data(data)

source code 

Sets the entire contents of a data field.

The length of data must be equal to the number of elements of the data field.

Parameters:
  • data - Sequence of floating point values. (list)
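
A small round-trip sketch combining get_data() and set_data(), assuming an existing DataField named field; the data_changed() call at the end is assumed here only to notify attached views of the modification:

    data = field.get_data()          # a copy of the values
    data = [2.0*z for z in data]     # scale everything by 2
    field.set_data(data)             # length must equal xres*yres
    field.data_changed()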

fit_polynom(col_degree, row_degree)

source code 

Fits a two-dimensional polynomial to a data field.

Parameters:
  • col_degree - Degree of polynomial to fit column-wise (x-coordinate). (int)
  • row_degree - Degree of polynomial to fit row-wise (y-coordinate). (int)
Returns:
a newly allocated array with coefficients. (list)

area_fit_polynom(col, row, width, height, col_degree, row_degree)

source code 

Fits a two-dimensional polynomial to a rectangular part of a data field.

The coefficients are stored by row into coeffs, like data in a data field. The row index is the y-degree, the column index is the x-degree.

Note that naive x^n y^m polynomial fitting is numerically unstable; therefore this method works only up to col_degree = row_degree = 6.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • col_degree - Degree of polynomial to fit column-wise (x-coordinate). (int)
  • row_degree - Degree of polynomial to fit row-wise (y-coordinate). (int)
Returns:
a newly allocated array with coefficients. (list)

subtract_polynom(col_degree, row_degree, coeffs)

source code 

Subtracts a two-dimensional polynomial from a data field.

Parameters:
  • col_degree - Degree of polynomial to subtract column-wise (x-coordinate). (int)
  • row_degree - Degree of polynomial to subtract row-wise (y-coordinate). (int)
  • coeffs - An array of size (row_degree+1)*(col_degree+1) with coefficients, see DataField.area_fit_polynom() for details. (list)
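
A sketch of the typical pairing of fit_polynom() and subtract_polynom() to remove a smooth polynomial background, assuming an existing DataField named field:

    # Remove a second-order polynomial background from the whole field.
    coeffs = field.fit_polynom(2, 2)
    field.subtract_polynom(2, 2, coeffs)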

area_subtract_polynom(col, row, width, height, col_degree, row_degree, coeffs)

source code 

Subtracts a two-dimensional polynomial from a rectangular part of a data field.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • col_degree - Degree of polynomial to subtract column-wise (x-coordinate). (int)
  • row_degree - Degree of polynomial to subtract row-wise (y-coordinate). (int)
  • coeffs - An array of size (row_degree+1)*(col_degree+1) with coefficients, see DataField.area_fit_polynom() for details. (list)

fit_legendre(col_degree, row_degree)

source code 

Fits two-dimensional Legendre polynomial to a data field.

See DataField.area_fit_legendre() for details.

Parameters:
  • col_degree - Degree of polynomial to fit column-wise (x-coordinate). (int)
  • row_degree - Degree of polynomial to fit row-wise (y-coordinate). (int)
Returns:
A newly allocated array with coefficients. (list)

area_fit_legendre(col, row, width, height, col_degree, row_degree)

source code 

Fits two-dimensional Legendre polynomial to a rectangular part of a data field.

The col_degree and row_degree parameters limit the maximum powers of x and y exactly as if simple powers were fitted; therefore, if you do not intend to interpret the contents of coeffs yourself, the only difference is that this method is much more numerically stable.

The coefficients are organized exactly like in DataField.area_fit_polynom(), but they are not coefficients of x^n y^m; instead they are coefficients of P_n(x) P_m(y), where P are Legendre polynomials. The polynomials are evaluated in coordinates where the first row (column) corresponds to -1.0 and the last row (column) to 1.0.

Note the polynomials are normal Legendre polynomials that are not exactly orthogonal on a discrete point set (when their degrees are equal mod 2).

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • col_degree - Degree of polynomial to fit column-wise (x-coordinate). (int)
  • row_degree - Degree of polynomial to fit row-wise (y-coordinate). (int)
Returns:
A newly allocated array with coefficients. (list)

subtract_legendre(col_degree, row_degree, coeffs)

source code 

Subtracts a two-dimensional Legendre polynomial fit from a data field.

Parameters:
  • col_degree - Degree of polynomial to subtract column-wise (x-coordinate). (int)
  • row_degree - Degree of polynomial to subtract row-wise (y-coordinate). (int)
  • coeffs - An array of size (row_degree+1)*(col_degree+1) with coefficients, see DataField.area_fit_legendre() for details. (list)

area_subtract_legendre(col, row, width, height, col_degree, row_degree, coeffs)

source code 

Subtracts a two-dimensional Legendre polynomial fit from a rectangular part of a data field.

Due to the transform of coordinates to [-1,1] x [-1,1], this method can be used on an area of dimensions different than the area the coefficients were calculated for.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • col_degree - Degree of polynomial to subtract column-wise (x-coordinate). (int)
  • row_degree - Degree of polynomial to subtract row-wise (y-coordinate). (int)
  • coeffs - An array of size (row_degree+1)*(col_degree+1) with coefficients, see DataField.area_fit_legendre() for details. (list)

fit_poly_max(max_degree)

source code 

Fits two-dimensional polynomial with limited total degree to a data field.

See DataField.area_fit_poly_max() for details.

Parameters:
  • max_degree - Maximum total polynomial degree, that is the maximum of m+n in x^n y^m terms. (int)
Returns:
A newly allocated array with coefficients. (list)

area_fit_poly_max(col, row, width, height, max_degree)

source code 

Fits two-dimensional polynomial with limited total degree to a rectangular part of a data field.

See DataField.area_fit_legendre() for description. This function differs by limiting the total maximum degree, while DataField.area_fit_legendre() limits the maximum degrees in horizontal and vertical directions independently.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • max_degree - Maximum total polynomial degree, that is the maximum of m+n in x^n y^m terms. (int)
Returns:
A newly allocated array with coefficients. (list)

subtract_poly_max(max_degree, coeffs)

source code 

Subtracts a two-dimensional polynomial with limited total degree from a data field.

Parameters:
  • max_degree - Maximum total polynomial degree, that is the maximum of m+n in x^n y^m terms. (int)
  • coeffs - An array of size (max_degree+1)*(max_degree+2)/2 with coefficients, see DataField.area_fit_poly_max() for details. (list)

area_subtract_poly_max(col, row, width, height, max_degree, coeffs)

source code 

Subtracts a two-dimensional polynomial with limited total degree from a rectangular part of a data field.

Due to the transform of coordinates to [-1,1] x [-1,1], this method can be used on an area of dimensions different than the area the coefficients were calculated for.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • max_degree - Maximum total polynomial degree, that is the maximum of m+n in x^n y^m terms. (int)
  • coeffs - An array of size (max_degree+1)*(max_degree+2)/2 with coefficients, see DataField.area_fit_poly_max() for details. (list)

fit_poly(mask_field, term_powers, exclude)

source code 

Fit a given set of polynomial terms to a data field.

Parameters:
  • mask_field - Mask specifying which values to take into account, or None for full data_field. Values equal to 0.0 and below cause the corresponding data_field samples to be ignored; values equal to 1.0 and above cause inclusion of the corresponding data_field samples. The behaviour for values inside (0.0, 1.0) is undefined (it may be specified in the future). (DataField)
  • term_powers - Array of size 2*nterms describing the terms to fit. Each term is described by a pair of powers (powerx, powery). (list)
  • exclude - Interpret values w in the mask as 1.0-w. (bool)
Returns:
Value coeffs. ((list))

Since: 2.11

area_fit_poly(mask_field, col, row, width, height, term_powers, exclude)

source code 

Fit a given set of polynomial terms to a rectangular part of a data field.

The polynomial coefficients correspond to normalized coordinates that are always from the interval [-1,1] where -1 corresponds to the left/topmost pixel and 1 corresponds to the bottom/rightmost pixel of the area.

Parameters:
  • mask_field - Mask specifying which values to take into account, or None for full data_field. Values equal to 0.0 and below cause the corresponding data_field samples to be ignored; values equal to 1.0 and above cause inclusion of the corresponding data_field samples. The behaviour for values inside (0.0, 1.0) is undefined (it may be specified in the future). (DataField)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • term_powers - Array of size 2*nterms describing the terms to fit. Each term is described by a pair of powers (powerx, powery). (list)
  • exclude - Interpret values w in the mask as 1.0-w. (bool)
Returns:
Value coeffs. ((list))

Since: 2.11

subtract_poly(term_powers, coeffs)

source code 

Subtract a given set of polynomial terms from a data field.

Parameters:
  • term_powers - Array of size 2*nterms describing the fitted terms. Each term is described by a pair of powers (powerx, powery). (list)
  • coeffs - Array of size nterms with the coefficients of the fitted terms. (list)

Since: 2.11
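
A sketch of the term_powers format, assuming an existing DataField named field; each consecutive pair in the flat list is one (powerx, powery) term, so the list below describes the plane a + b*x + c*y:

    term_powers = [0, 0,   # constant term
                   1, 0,   # x term
                   0, 1]   # y term
    coeffs = field.fit_poly(None, term_powers, False)
    field.subtract_poly(term_powers, coeffs)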

area_subtract_poly(col, row, width, height, term_powers, coeffs)

source code 

Subtract a given set of polynomial terms from a rectangular part of a data field.

Parameters:
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • term_powers - Array of size 2*nterms describing the fitted terms. Each term is described by a pair of powers (powerx, powery). (list)
  • coeffs - Array of size nterms with the coefficients of the fitted terms. (list)

Since: 2.11

area_fit_local_planes(size, col, row, width, height, types)

source code 

Fits a plane through neighbourhood of each sample in a rectangular part of a data field.

The sample is always at the origin of its local (x,y) coordinate system, even if the neighbourhood is not centered about it (e.g. because the sample is on the edge of the data field). The z-coordinate is, however, not centered; that is, PLANE_FIT_A is the ordinary mean value.

Parameters:
  • size - Neighbourhood size (must be at least 2). It is centered around each pixel, unless size is even, when it sticks out to the right. (int)
  • col - Upper-left column coordinate. (int)
  • row - Upper-left row coordinate. (int)
  • width - Area width (number of columns). (int)
  • height - Area height (number of rows). (int)
  • types - The types of requested quantities. (list)
Returns:
An array of data fields with the requested quantities. (list)

fit_local_planes(size, types)

source code 

Fits a plane through neighbourhood of each sample in a data field.

See DataField.area_fit_local_planes() for details.

Parameters:
  • size - Neighbourhood size. (int)
  • types - The types of requested quantities. (list)
Returns:
An array of data fields with requested quantities. (list)
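
A sketch, assuming an existing DataField named field and assuming the slope quantities are named PLANE_FIT_BX and PLANE_FIT_BY following the PLANE_FIT_A convention mentioned above:

    import gwy

    types = [gwy.PLANE_FIT_BX, gwy.PLANE_FIT_BY]
    bx_field, by_field = field.fit_local_planes(5, types)   # 5x5 neighbourhood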

elliptic_area_extract(col, row, width, height)

source code 

Extracts values from an elliptic region of a data field.

The elliptic region is defined by its bounding box which must be completely contained in the data field.

Parameters:
  • col - Upper-left bounding box column coordinate. (int)
  • row - Upper-left bounding box row coordinate. (int)
  • width - Bounding box width (number of columns). (int)
  • height - Bounding box height (number of rows). (int)
Returns:
The extracted values. (list)

elliptic_area_unextract(col, row, width, height, data)

source code 

Puts values back to an elliptic region of a data field.

The elliptic region is defined by its bounding box. In versions prior to 2.59 the bounding box must be completely contained in the data field. Since version 2.59 the ellipse can intersect the data field in any manner.

This method does the reverse of DataField.elliptic_area_extract(), making it possible to implement pixel-wise filters on elliptic areas. Values from data are put back to the same positions DataField.elliptic_area_extract() took them from.

Parameters:
  • col - Upper-left bounding box column coordinate. (int)
  • row - Upper-left bounding box row coordinate. (int)
  • width - Bounding box width (number of columns). (int)
  • height - Bounding box height (number of rows). (int)
  • data - The values to put back. It must be the same array as in previous DataField.elliptic_area_extract(). (list)

circular_area_extract(col, row, radius)

source code 

Extracts values from a circular region of a data field.

Parameters:
  • col - Column index of circular area centre. (int)
  • row - Row index of circular area centre. (int)
  • radius - Circular area radius (in pixels). See DataField.circular_area_extract_with_pos() for caveats. (float)
Returns:
Array of values. (list)

circular_area_unextract(col, row, radius, data)

source code 

Puts values back to a circular region of a data field.

This method does the reverse of DataField.circular_area_extract(), making it possible to implement pixel-wise filters on circular areas. Values from data are put back to the same positions DataField.circular_area_extract() took them from.

Parameters:
  • col - Column index of circular area centre. (int)
  • row - Row index of circular area centre. (int)
  • radius - Circular area radius (in pixels). (float)
  • data - The values to put back. It must be the same array as in previous DataField.circular_area_extract(). (list)
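
A sketch of the extract/modify/unextract pattern for a pixel-wise filter on a circular area, assuming field, col and row already exist; the half-integer radius follows the recommendation given for DataField.circular_area_extract_with_pos():

    radius = 10.5
    values = field.circular_area_extract(col, row, radius)
    mean = sum(values)/len(values)
    # Replace the circular area by its local mean value.
    field.circular_area_unextract(col, row, radius, [mean]*len(values))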

circular_area_extract_with_pos(col, row, radius)

source code 

Extracts values with positions from a circular region of a data field.

The row and column indices stored to xpos and ypos are relative to the area centre, i.e. to (col, row). The central pixel will therefore have 0 at the corresponding position in both xpos and ypos.

Parameters:
  • col - Column index of circular area centre. (int)
  • row - Row index of circular area centre. (int)
  • radius - Circular area radius (in pixels). Any value is allowed, although to get areas that do not deviate too much from true circles after pixelization, half-integer values are recommended; integer radii are NOT recommended. (float)
Returns:
Tuple consisting of 3 values (value, xpos, ypos). ((list), (list), (list))

Since: 2.2

local_maximum(x, y, ax, ay)

source code 

Searches an elliptical area in a data field for local maximum.

The area may stick outside the data field.

The function first finds the maximum within the ellipse, intersected with the data field, and then tries subpixel refinement. The maximum is considered successfully located if it is inside the data field, i.e. not on the edge, there is no higher value in its 8-neighbourhood, and the subpixel refinement of its position succeeds (which usually happens when the first two conditions are met, but not always).

Even if the function returns False, the values of x and y are reasonable, but they may not correspond to an actual maximum.

The radii can be zero. A single pixel is then examined, but if it is indeed a local maximum, its position is refined.

Parameters:
  • x - Approximate maximum x-location to be improved (in pixels). (float)
  • y - Approximate maximum y-location to be improved (in pixels). (float)
  • ax - Horizontal search radius. (int)
  • ay - Vertical search radius. (int)
Returns:
Tuple consisting of 3 values (value, x_out, y_out). ((bool), (float), (float))

Since: 2.49

affine(dest, affine, interp, exterior, fill_value)

source code 

Performs an affine transformation of a data field in the horizontal plane.

Note the transform affine is the inverse transform, in other words it calculates the old coordinates from the new coordinates. This way even degenerate (non-invertible) transforms can be meaningfully used. Also note that the (column, row) coordinate system is left-handed.

The EXTERIOR_LAPLACE exterior type cannot be used with this function.

Parameters:
  • dest - Destination data field. (DataField)
  • affine - Inverse affine transformation coefficients; the transformation maps new coordinates to old coordinates (see above). (list)
  • interp - Interpolation type to use. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
  • exterior - Exterior pixels handling. Expected values: EXTERIOR_UNDEFINED, EXTERIOR_BORDER_EXTEND, EXTERIOR_MIRROR_EXTEND, EXTERIOR_PERIODIC, EXTERIOR_FIXED_VALUE, EXTERIOR_LAPLACE. (ExteriorType)
  • fill_value - The value to use with EXTERIOR_FIXED_VALUE. (float)

Since: 2.34

affine_prepare(dest, a1a2, a1a2_corr, scaling, prevent_rotation, oversampling)

source code 

Resolves an affine transformation of a data field in the horizontal plane.

This function calculates suitable arguments for DataField.affine() from given images and lattice vectors (in real coordinates).

Data field dest will be resized and its real dimensions and units set in anticipation of DataField.affine(). Its contents will be destroyed.

Note that a1a2_corr is an input-output parameter. In general, the vectors will be modified according to scaling and prevent_rotation to the actual vectors in dest after the transformation. Only if prevent_rotation is False and scaling is AFFINE_SCALING_AS_GIVEN the vectors are preserved.

Parameters:
  • dest - Destination data field. (DataField)
  • a1a2 - Lattice vectors (or generally base vectors) in source, as an array of four components: x1, y1, x2 and y2. (list)
  • a1a2_corr - Correct lattice vectors (or generally base vectors) dest should have after the affine transform, in the same form as a1a2. (list)
  • scaling - How (or if) to scale the correct lattice vectors. Expected values: AFFINE_SCALING_AS_GIVEN, AFFINE_SCALING_PRESERVE_AREA, AFFINE_SCALING_PRESERVE_X. (AffineScalingType)
  • prevent_rotation - True to prevent rotation of the data by rotating a1a2_corr as a whole to a direction preserving the data orientation. False to take a1a2_corr as given. (bool)
  • oversampling - Oversampling factor. Values larger than 1 mean smaller pixels (and more of them) in dest, values smaller than 1 the opposite. Pass 1.0 for the default pixel size choice. (float)
Returns:
Tuple consisting of 2 values (a1a2_corr_out, invtrans). ((list), (list))

Since: 2.49

waterpour(result)

source code 

Performs the classical Vincent watershed segmentation of a data field.

The segmentation always results in the entire field being masked with the exception of thin (8-connectivity) lines separating the segments (grains).

Compared to DataField.grains_mark_watershed(), this algorithm is very fast. However, when used alone, it typically results in a serious oversegmentation as each local minimum gives rise to a grain. Furthermore, the full segmentation means that even pixels which would be considered outside any grain in the topographical sense are assigned to some catchment basin. Therefore, pre- or postprocessing is usually necessary, using the gradient image or a more sophisticated method.

The function does not assign pixels with value HUGE_VAL or larger to any segment. This can be used to pre-mark certain areas explicitly as boundaries.

Since the algorithm numbers the grains as a side effect, you can pass a grains array and get the grain numbers immediately, avoiding the relatively (although not drastically) expensive DataField.number_grains() call.

Parameters:
  • result - Data field that will be filled with the resulting mask. It will be resized to the dimensions of data_field and its properties set accordingly. (DataField)
Returns:
Tuple consisting of 2 values (value, grains). ((int), (list))

Since: 2.37
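
A minimal sketch, assuming an existing DataField named field; the initial size of result does not matter because waterpour() resizes it:

    import gwy

    result = gwy.DataField(1, 1, 1.0, 1.0, False)
    nsegments, grains = field.waterpour(result)
    # grains already contains the segment numbers; no number_grains() call is needed.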

measure_lattice_acf(a1a2)

source code 

Estimates or improves estimate of lattice vectors from a 2D ACF field.

Note that the 2D ACF of a data field has to be passed, not the data field itself. The correlation function can be for instance calculated by DataField.acf2d(). However, you can calculate and/or process the correlation function in any way you see fit.

When the vectors in a1a2 are zero the function attempts to estimate the lattice from scratch. But if a1a2 contains two non-zero vectors it takes them as approximate lattice vectors to improve.

If the function returns False, the array a1a2 is filled with useless values and must be ignored.

Parameters:
  • a1a2 - Lattice vectors as an array of four components: x1, y1, x2 and y2 (in real coordinates). (list)
Returns:
Tuple consisting of 2 values (a1a2_out, succeeded). ((list), (BooleanOutArg))

Since: 2.49
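
A sketch assuming acf_field already holds the 2D ACF of the measured image (computed beforehand in whatever way suits the data); passing zero vectors requests an estimate from scratch:

    a1a2 = [0.0, 0.0, 0.0, 0.0]       # zero vectors: estimate from scratch
    a1a2_out, succeeded = acf_field.measure_lattice_acf(a1a2)
    if succeeded:
        x1, y1, x2, y2 = a1a2_out     # lattice vectors in real coordinates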

measure_lattice_psdf(a1a2)

source code 

Estimates or improves estimate of lattice vectors from a 2D PSDF field.

Note that the 2D PSDF of a data field has to be passed, not the data field itself. The spectral density can be, for instance, calculated by DataField.fft2d() and summing the squares of the real and imaginary parts. However, you can calculate and/or process the spectral density in any way you see fit.

When the vectors in a1a2 are zero the function attempts to estimate the lattice from scratch. But if a1a2 contains two non-zero vectors it takes them as approximate lattice vectors to improve.

If the function returns False, the array a1a2 is filled with useless values and must be ignored.

Parameters:
  • a1a2 - Lattice vectors as an array of four components: x1, y1, x2 and y2 (in real coordinates). (list)
Returns:
Tuple consisting of 2 values (a1a2_out, succeeded). ((list), (BooleanOutArg))

Since: 2.49

get_local_maxima_list(ndata, skip, threshold, subpixel)

source code 

Locates local maxima in a data field.

At most ndata maxima are located (with the largest values).

Parameters:
  • ndata - The maximum number of maxima to locate. (int)
  • skip - Minimum pixel distance between maxima. (int)
  • threshold - Minimum value to be considered a maximum. (float)
  • subpixel - True for subpixel refinement. (bool)
Returns:
Tuple consisting of 3 values (xdata, ydata, zdata). ((list), (list), (list))

get_profile_mask(mask, masking, xfrom, yfrom, xto, yto, res, thickness, interpolation)

source code 

Extracts a possibly averaged profile from data field, with masking.

The extracted profile can contain holes due to masking. It can also contain no points at all if all the data values along the profile were excluded due to masking – in this case None is returned.

Unlike DataField.get_profile(), this function takes real coordinates (without offsets), not row and column indices.

Parameters:
  • mask - Mask specifying which values to take into account/exclude, or None. (DataField)
  • masking - Masking mode to use. Expected values: MASK_EXCLUDE, MASK_INCLUDE, MASK_IGNORE. (MaskingType)
  • xfrom - The real x-coordinate where the line starts. (float)
  • yfrom - The real y-coordinate where the line starts. (float)
  • xto - The real x-coordinate where the line ends. (float)
  • yto - The real y-coordinate where the line ends. (float)
  • res - Requested resolution, i.e. the number of samples to take. If nonpositive, sampling is chosen to match data_field's. (int)
  • thickness - Thickness of line to be averaged. (int)
  • interpolation - Interpolation type to use. Expected values: INTERPOLATION_NONE, INTERPOLATION_ROUND, INTERPOLATION_LINEAR, INTERPOLATION_BILINEAR, INTERPOLATION_KEY, INTERPOLATION_BSPLINE, INTERPOLATION_OMOMS, INTERPOLATION_NNA, INTERPOLATION_SCHAUM. (InterpolationType)
Returns:
A list of XY coordinate pairs, or None. (list)

Since: 2.49
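
A sketch, assuming an existing DataField named field with zero offsets, that extracts an unmasked diagonal profile across the whole field:

    import gwy

    xy = field.get_profile_mask(None, gwy.MASK_IGNORE,
                                0.0, 0.0, field.get_xreal(), field.get_yreal(),
                                -1, 1, gwy.INTERPOLATION_LINEAR)
    # xy is None if everything along the line was masked out;
    # otherwise it is a list of XY coordinate pairs.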

number_grains()

source code 

Constructs an array with grain numbers from a mask data field.

Returns:
A list of integers, containing 0 outside grains and the grain number inside a grain. (list)

number_grains_periodic()

source code 

Constructs an array with grain numbers from a mask data field treated as periodic.

Returns:
A list of integers, containing 0 outside grains and the grain number inside a grain. (list)

get_grain_sizes(grains)

source code 

Finds the sizes of all grains in a mask data field.

Size is the number of pixels in the grain.

The zeroth element of sizes is filled with the number of pixels not covered by the mask.

Parameters:
  • grains - Array of grain numbers. (list)
Returns:
Value sizes. ((list))

Since: 2.47

get_grain_bounding_boxes(grains)

source code 

Finds bounding boxes of all grains in a mask data field.

The array grains must have the same number of elements as mask_field. Normally it is obtained from a function such as DataField.number_grains().

Parameters:
  • grains - Array of grain numbers. (list)
Returns:
Value bboxes. ((list))

get_grain_bounding_boxes_periodic(grains)

source code 

Finds bounding boxes of all grains in a mask data field, assuming periodic boundary condition.

The array grains must have the same number of elements as mask_field. Normally it is obtained from a function such as DataField.number_grains().

Parameters:
  • grains - Array of grain numbers. (list)
Returns:
Value bboxes. ((list))

get_grain_inscribed_boxes(grains)

source code 

Finds maximum-area inscribed boxes of all grains in a mask data field.

The array grains must have the same number of elements as mask_field. Normally it is obtained from a function such as DataField.number_grains().

Parameters:
  • grains - Array of grain numbers. (list)
Returns:
Value bboxes. ((list))

grains_get_values(grains, quantity)

source code 

Finds a specified quantity for all grains in a data field.

The array grains must have the same number of elements as data_field. Normally it is obtained from a function such as DataField.number_grains() for the corresponding mask.

Parameters:
  • grains - Array of grain numbers. (list)
  • quantity - The quantity to calculate, identified by GrainQuantity. Expected values: GRAIN_VALUE_PROJECTED_AREA, GRAIN_VALUE_EQUIV_SQUARE_SIDE, GRAIN_VALUE_EQUIV_DISC_RADIUS, GRAIN_VALUE_SURFACE_AREA, GRAIN_VALUE_MAXIMUM, GRAIN_VALUE_MINIMUM, GRAIN_VALUE_MEAN, GRAIN_VALUE_MEDIAN, GRAIN_VALUE_PIXEL_AREA, GRAIN_VALUE_HALF_HEIGHT_AREA, GRAIN_VALUE_FLAT_BOUNDARY_LENGTH, GRAIN_VALUE_RMS, GRAIN_VALUE_MINIMUM_BOUND_SIZE, GRAIN_VALUE_MINIMUM_BOUND_ANGLE, GRAIN_VALUE_MAXIMUM_BOUND_SIZE, GRAIN_VALUE_MAXIMUM_BOUND_ANGLE, GRAIN_VALUE_CENTER_X, GRAIN_VALUE_CENTER_Y, GRAIN_VALUE_VOLUME_0, GRAIN_VALUE_VOLUME_MIN, GRAIN_VALUE_VOLUME_LAPLACE, GRAIN_VALUE_SLOPE_THETA, GRAIN_VALUE_SLOPE_PHI, GRAIN_VALUE_BOUNDARY_MAXIMUM, GRAIN_VALUE_BOUNDARY_MINIMUM, GRAIN_VALUE_CURVATURE_CENTER_X, GRAIN_VALUE_CURVATURE_CENTER_Y, GRAIN_VALUE_CURVATURE_CENTER_Z, GRAIN_VALUE_CURVATURE1, GRAIN_VALUE_CURVATURE2, GRAIN_VALUE_CURVATURE_ANGLE1, GRAIN_VALUE_CURVATURE_ANGLE2, GRAIN_VALUE_INSCRIBED_DISC_R, GRAIN_VALUE_INSCRIBED_DISC_X, GRAIN_VALUE_INSCRIBED_DISC_Y, GRAIN_VALUE_CONVEX_HULL_AREA, GRAIN_VALUE_CIRCUMCIRCLE_R, GRAIN_VALUE_CIRCUMCIRCLE_X, GRAIN_VALUE_CIRCUMCIRCLE_Y, GRAIN_VALUE_MEAN_RADIUS, GRAIN_VALUE_EQUIV_ELLIPSE_MAJOR, GRAIN_VALUE_EQUIV_ELLIPSE_MINOR, GRAIN_VALUE_EQUIV_ELLIPSE_ANGLE, GRAIN_VALUE_MINIMUM_MARTIN_DIAMETER, GRAIN_VALUE_MINIMUM_MARTIN_ANGLE, GRAIN_VALUE_MAXIMUM_MARTIN_DIAMETER, GRAIN_VALUE_MAXIMUM_MARTIN_ANGLE. (GrainQuantity)
Returns:
Value values. ((list))
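
A typical workflow sketch, assuming mask_field is a mask data field and field the corresponding data field of the same dimensions:

    import gwy

    grains = mask_field.number_grains()
    sizes = mask_field.get_grain_sizes(grains)   # element 0: pixels outside any grain
    areas = field.grains_get_values(grains, gwy.GRAIN_VALUE_PROJECTED_AREA)
    # areas[i] (for i >= 1) is the projected area of grain number i.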

grains_get_distribution(grain_field, grains, quantity, nstats)

source code 

Calculates the distribution of a specified grain quantity.

The array grains must have the same number of elements as data_field. Normally it is obtained from a function such as DataField.number_grains() for the corresponding mask.

Parameters:
  • grain_field - A data field representing the mask. It must have the same dimensions as the data field. (DataField)
  • grains - Array of grain numbers. (list)
  • quantity - The quantity to calculate, identified by GrainQuantity. Expected values: GRAIN_VALUE_PROJECTED_AREA, GRAIN_VALUE_EQUIV_SQUARE_SIDE, GRAIN_VALUE_EQUIV_DISC_RADIUS, GRAIN_VALUE_SURFACE_AREA, GRAIN_VALUE_MAXIMUM, GRAIN_VALUE_MINIMUM, GRAIN_VALUE_MEAN, GRAIN_VALUE_MEDIAN, GRAIN_VALUE_PIXEL_AREA, GRAIN_VALUE_HALF_HEIGHT_AREA, GRAIN_VALUE_FLAT_BOUNDARY_LENGTH, GRAIN_VALUE_RMS, GRAIN_VALUE_MINIMUM_BOUND_SIZE, GRAIN_VALUE_MINIMUM_BOUND_ANGLE, GRAIN_VALUE_MAXIMUM_BOUND_SIZE, GRAIN_VALUE_MAXIMUM_BOUND_ANGLE, GRAIN_VALUE_CENTER_X, GRAIN_VALUE_CENTER_Y, GRAIN_VALUE_VOLUME_0, GRAIN_VALUE_VOLUME_MIN, GRAIN_VALUE_VOLUME_LAPLACE, GRAIN_VALUE_SLOPE_THETA, GRAIN_VALUE_SLOPE_PHI, GRAIN_VALUE_BOUNDARY_MAXIMUM, GRAIN_VALUE_BOUNDARY_MINIMUM, GRAIN_VALUE_CURVATURE_CENTER_X, GRAIN_VALUE_CURVATURE_CENTER_Y, GRAIN_VALUE_CURVATURE_CENTER_Z, GRAIN_VALUE_CURVATURE1, GRAIN_VALUE_CURVATURE2, GRAIN_VALUE_CURVATURE_ANGLE1, GRAIN_VALUE_CURVATURE_ANGLE2, GRAIN_VALUE_INSCRIBED_DISC_R, GRAIN_VALUE_INSCRIBED_DISC_X, GRAIN_VALUE_INSCRIBED_DISC_Y, GRAIN_VALUE_CONVEX_HULL_AREA, GRAIN_VALUE_CIRCUMCIRCLE_R, GRAIN_VALUE_CIRCUMCIRCLE_X, GRAIN_VALUE_CIRCUMCIRCLE_Y, GRAIN_VALUE_MEAN_RADIUS, GRAIN_VALUE_EQUIV_ELLIPSE_MAJOR, GRAIN_VALUE_EQUIV_ELLIPSE_MINOR, GRAIN_VALUE_EQUIV_ELLIPSE_ANGLE, GRAIN_VALUE_MINIMUM_MARTIN_DIAMETER, GRAIN_VALUE_MINIMUM_MARTIN_ANGLE, GRAIN_VALUE_MAXIMUM_MARTIN_DIAMETER, GRAIN_VALUE_MAXIMUM_MARTIN_ANGLE. (GrainQuantity)
  • nstats - The number of bins in the histogram. Pass a non-positive value to determine the number of bins automatically. (int)
Returns:
The distribution as a data line. (DataLine)

duplicate()

source code 

Creates a duplicate (deep copy) of the data field. This is a convenience wrapper around gwy_serializable_duplicate() with all the necessary typecasting.

Use DataField.new_alike() if you don't want to copy data, only resolutions and units.

Returns:
(DataField)

get_xmeasure()

source code 

Alias for DataField.get_dx().

Returns:
(float)

get_ymeasure()

source code 

Alias for DataField.get_dy().

Returns:
(float)

get_data_pointer()

source code 

Gets a pointer to the raw data which the data field contains.

Returns:
integer pointing to the raw data of the data field (long)