The Data Arithmetic module performs arbitrary point-wise operations on a single data field or on the corresponding points of several data fields (currently up to eight). Although this is not its primary function, it can also be used as a calculator with immediate expression evaluation if a plain numerical expression is entered. The expression syntax is described in section Expressions.
The expression can contain the following variables representing values from the individual input data fields:
- Data value at the pixel. The value is in base physical units; e.g. for a height of 233 nm, the value is 2.33×10⁻⁷.
- Mask value at the pixel. The mask value is either 0 (for unmasked pixels) or 1 (for masked pixels). The mask variables can also be used if no mask is present; the value is then 0 for all pixels.
- Horizontal derivative at the pixel. Again, the value is in physical units. The derivative is calculated as the standard symmetrical derivative, except at edge pixels where the one-sided derivative is taken.
- Vertical derivative at the pixel, defined similarly to the horizontal derivative.
- Horizontal coordinate of the pixel (in real units). It is the same in all fields due to the compatibility requirement (see below).
- Vertical coordinate of the pixel (in real units). It is the same in all fields due to the compatibility requirement (see below).
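The derivative scheme described above (symmetric differences in the interior, one-sided differences at the edges) matches what NumPy's np.gradient computes, so it can serve as a minimal sketch; the 1 nm pixel spacing below is a made-up example value:

```python
import numpy as np

# Hypothetical 1 nm pixel spacing; the real module uses the field's physical step.
dx = 1e-9
data = np.array([[0.0, 1.0, 4.0, 9.0],
                 [0.0, 2.0, 8.0, 18.0]]) * 1e-9

# np.gradient uses symmetric (central) differences in the interior and
# one-sided differences at the edge pixels, as described above.
bx = np.gradient(data, dx, axis=1)   # horizontal derivative
by = np.gradient(data, dx, axis=0)   # vertical derivative
```

The results are in physical units (dimensionless here, since both heights and spacing are in metres).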
In addition, the constant π is available and can be typed either as π or pi.
All data fields that appear in the expression have to be compatible. This means their dimensions (both pixel and physical) have to be identical. Other data fields, i.e. those not actually entering the expression, are irrelevant. The result is always put into a newly created data field in the current file (which may be different from the files of all operands).
Since the evaluator does not automatically infer the correct physical units of the result, the units have to be specified explicitly. This can be done in two ways: either by selecting a data field that has the same value units as the result should have, or by choosing the Specify units option and typing the units manually.
The following table lists several simple expression examples:
- Value inversion. The result is very similar to Invert Value, except that Invert Value reflects about the mean value while here all values are simply negated.
- Squared difference between two data fields.
- Modification of values under a mask. Specifically, the value 10⁻⁸ is added to all masked pixels.
- Combination of two data fields. Pixels are taken either from data field 1 or 2, depending on the mask on field 3.
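The examples above can be sketched with NumPy arrays standing in for the data fields and the mask; the names d1, d2 and m3 below are illustrative placeholders, not necessarily the module's own variable names:

```python
import numpy as np

# Illustrative placeholders for two compatible data fields and a mask field.
d1 = np.array([[1.0, 2.0], [3.0, 4.0]])
d2 = np.array([[0.5, 2.0], [2.0, 1.0]])
m3 = np.array([[0.0, 1.0], [1.0, 0.0]])   # mask values are 0 or 1

inverted = -d1                       # value inversion
sqdiff   = (d1 - d2) ** 2            # squared difference of two fields
shifted  = d1 + m3 * 1e-8            # add 1e-8 to masked pixels only
combined = d1 * (1 - m3) + d2 * m3   # pixels from d1 or d2 based on the mask
```

All operations are point-wise, which is why the fields must have identical dimensions.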
In the calculator mode the expression is evaluated immediately as it is typed and the result is displayed below the Expression entry. No special action is necessary to switch between data field expressions and the calculator: expressions containing only numeric quantities are evaluated immediately, while expressions referring to data fields are used to calculate a new data field. The preview showing the result of an operation with fields is not updated immediately as you type; you can update it by pressing Enter in the expression entry.
Immerse inserts a detailed, high-resolution image into a larger image. The image the function was run on forms the large, base image.
The detail can be positioned manually on the large image with the mouse. A button can then be used to find, in the neighbourhood of the current position, the exact coordinates that give the maximum correlation between the detail and the large image; alternatively, the best-match position can be searched for through the whole image.
Note that the correlation search is insensitive to value scales and offsets; the automated matching is therefore based solely on data features, and absolute heights play no role.
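A minimal sketch of why the matching is scale- and offset-insensitive: normalising both patches to zero mean and unit variance before correlating cancels any affine difference in heights (an illustration only, not the module's actual implementation):

```python
import numpy as np

def ncc(a, b):
    # Normalise both patches to zero mean and unit variance, then correlate;
    # any value offset or positive scale difference cancels out.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

rng = np.random.default_rng(0)
detail = rng.standard_normal((8, 8))
rescaled = 3.0 * detail + 5.0   # same features, different heights and scale
score = ncc(detail, rescaled)
```

Despite the different absolute heights, the normalised score of the two patches is 1, i.e. a perfect match.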
Result Sampling controls the size and resolution of the result image:
Detail Leveling selects the transform of the z values of the detail:
Images that form parts of a larger image can be merged together with Merge. The image the function was run on corresponds to the base image; the image selected with Merge with represents the second operand. The side of the base image the second image will be attached to is controlled with the Put second operand selector.
If the images match perfectly, they can simply be placed side by side with no adjustments. This behaviour is selected by the None option of the Align second operand control.
However, adjustments are usually necessary. If the images are of the same size and aligned in the direction perpendicular to the merging direction, the only degree of freedom is a possible overlap. The Join alignment method can be used in this case. Unlike the correlation search described below, it matches the absolute data values. This makes the option suitable for merging even very slowly varying images, provided their absolute height values are well defined.
Option Correlation selects automated alignment by correlation-based search of the best match. The search is performed both in the direction parallel to the attaching side and in the perpendicular direction. If a parallel shift is present, the result is expanded to contain both images fully (with undefined data filled with a background value).
Option Boundary treatment is useful only for the latter case of imperfectly aligned images. It controls the treatment of overlapping areas in the source images:
Stitching is an alternative to the merge module described above. It is mainly useful when the relative positions of the image parts are known exactly, because the positions are entered numerically. They are initialised using the image offsets, so if these are correct, the stitched image is formed automatically. A button for each part reverts manually modified offsets to the initial ones.
Two slightly different images of the same area (for example, before and after some treatment) can be cropped to their intersecting area (or the non-intersecting parts can be removed) with this module.
The intersecting part is determined by correlating the larger image with the centre area of the smaller image. The image resolutions (pixels per linear unit) should be equal.
The only parameter is Select second operand: the correlation between it and the current image is calculated, and both data fields are cropped to remove the non-intersecting near-border parts.
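The idea can be sketched as a brute-force correlation search; the function below is a deliberate simplification (the real module correlates only the centre area of the smaller image and works with physical offsets):

```python
import numpy as np

def best_offset(large, small):
    # Brute-force correlation: score every placement of the small image
    # inside the large one and keep the best-matching position.
    H, W = large.shape
    h, w = small.shape
    best, pos = -np.inf, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            score = float((large[i:i + h, j:j + w] * small).sum())
            if score > best:
                best, pos = score, (i, j)
    return pos

rng = np.random.default_rng(1)
large = rng.standard_normal((12, 12))
small = large[3:9, 4:10].copy()          # the same area, cut out of `large`
di, dj = best_offset(large, small)
# Crop the large field to the intersecting area:
cropped = large[di:di + 6, dj:dj + 6]
```

After cropping, both fields cover only the common area.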
This module finds local correlations between details in two different images. Ideally, it returns the shift of every pixel of the first image as seen in the second image. This can be used to determine local changes of a surface imaged twice (shifts can be caused, for example, by sample deformation or microscope malfunction).
For every pixel of the first operand (the current image), the module takes its neighbourhood and searches for the best correlation in the second operand within a defined area. The position of the correlation maximum is used as the shift value for that pixel of the first operand.
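The per-pixel search can be sketched as follows; the neighbourhood and search-area sizes are arbitrary illustration values:

```python
import numpy as np

def local_shift(img1, img2, i, j, nh=2, search=3):
    # Correlate the (2*nh+1)^2 neighbourhood of pixel (i, j) in img1 against
    # nearby positions in img2; return the displacement of the best match.
    ref = img1[i - nh:i + nh + 1, j - nh:j + nh + 1]
    best, shift = -np.inf, (0, 0)
    for di in range(-search, search + 1):
        for dj in range(-search, search + 1):
            cand = img2[i + di - nh:i + di + nh + 1,
                        j + dj - nh:j + dj + nh + 1]
            score = float((ref * cand).sum())
            if score > best:
                best, shift = score, (di, dj)
    return shift

rng = np.random.default_rng(2)
img1 = rng.standard_normal((20, 20))
img2 = np.roll(img1, (1, 2), axis=(0, 1))   # second image shifted by (1, 2)
shift = local_shift(img1, img2, 10, 10)
```

Repeating this for every pixel yields the shift field the module returns.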
This module searches for a given detail within the base image. It can produce an image of the correlation score, or mark the position of the found detail using a mask on the base image.
Convolution of two images can be performed using Convolve (see Convolution Filter for convolution with a small kernel, entered numerically). The module has only a few options:
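Convolution of two equally sized images can be sketched with FFTs; this illustrates only the mathematics, and the module's border and output-size handling may differ:

```python
import numpy as np

def convolve_fields(a, b):
    # Cyclic convolution of two equally sized fields via the FFT.
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

a = np.zeros((4, 4))
a[0, 0] = 1.0                                 # a delta image
b = np.arange(16, dtype=float).reshape(4, 4)
out = convolve_fields(a, b)
```

Convolving with a delta image at the origin leaves the other operand unchanged, which makes the behaviour easy to check.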
Neural network processing can be used to calculate one kind of data from another even if the formula or relation between them is not explicitly known. The relation is built into the network implicitly by a process called training, which employs pairs of known input and output data, usually called model and signal. In this process, the network is optimised to reproduce the signal from the model as well as possible. A trained network can then be used to process model data for which the output signal is not available and obtain – usually somewhat approximately – what the signal would look like. Another possible application is the approximation of data processing methods that are exact but very time-consuming. In this case the signal is the output of the exact method and the network is trained to reproduce it.
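The model/signal idea can be illustrated with a deliberately tiny stand-in: a single linear "neuron" trained by gradient descent to reproduce a synthetic signal from a model (the real module trains a multilayer network on pixel neighbourhoods):

```python
import numpy as np

# Training pairs: a synthetic "model" and the "signal" derived from it.
rng = np.random.default_rng(3)
model = rng.standard_normal(100)
signal = 2.0 * model + 1.0

# Train a single linear neuron by plain gradient descent on the pairs.
w, b = 0.0, 0.0
for _ in range(200):
    err = w * model + b - signal
    w -= 0.1 * (err * model).mean()
    b -= 0.1 * err.mean()

# w and b now approximate the hidden relation and can be applied to new
# model data whose signal is unknown.
```

Here the relation happens to be exactly learnable; with real data the reproduction is only approximate, as the text notes.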
Since training and application are two disparate steps, they are present as two different functions in Gwyddion.
The main functions that control the training process are contained in tab Training:
Neural network parameters can be modified in the Parameters tab. Changing either the window dimensions or the number of hidden nodes reinitialises the network (as if you had pressed the reinitialisation button).
A trained neural network can be saved, loaded to be retrained on different data, etc. The network list management is similar to raw file presets.
In addition to the networks in the list, there is one more unnamed network: the network currently in training. When you load a network, the network in training becomes a copy of the loaded network. Training then does not change the named networks; to save the network after training (under an existing or a new name) you must store it explicitly.
Application of a trained neural network is simple: just choose one from the list and apply it. The unnamed network currently in training is also present in the list, under the label “In training”.
Since the neural network processes and produces normalised data, it does not preserve proportionality well, especially if the scale of the training model differs considerably from the scale of the real inputs. If the output is expected to scale with the input, you can enable the Scale proportionally to input option, which scales the output with the inverse ratio of the actual and training input data ranges.
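One reading of the proportional rescaling (an assumption here, with a hypothetical helper name): if the actual input spans k times the training input range, the output is multiplied by k so that it stays proportional to the input:

```python
import numpy as np

def rescale_output(raw_output, train_in_range, actual_in_range):
    # Hypothetical helper: make the output scale with the input by
    # multiplying it by the ratio of actual to training input range.
    k = actual_in_range / train_in_range
    return raw_output * k

scaled = rescale_output(np.array([1.0, 2.0]),
                        train_in_range=10.0, actual_in_range=20.0)
```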