The general areas of the main application window are as follows:
The menu bar at the top provides access to general operations in VideoF2B. Use the File
menu to load videos for processing. The Tools menu provides useful tools and calculators.
The user controls area is enabled when processing Augmented-Reality videos. This
area is disabled when processing Basic videos.
The video area is the largest portion of the main window. It displays the video that is being processed.
Messages to the user are displayed in the message area below the video and user controls. Every message
is time-stamped with the local date and time of the user’s computer.
The status bar along the bottom displays the name of the loaded video file, occasional instructions, the
elapsed time in the video, and a progress bar.
The most prominent feature of VideoF2B is the drawing, or tracing, of a path behind a Control Line
aircraft in video. These traces help us to visualize the figures that a Stunt pilot performs during a flight.
There are two general ways to use VideoF2B to produce videos: Basic and Augmented Reality (AR).
Basic is the simplest use of VideoF2B. The result is a video where the path of the aircraft is traced with a
colored line. No additional geometry is drawn.
In Augmented-Reality (AR) mode, VideoF2B draws the traced path as well as reference geometry that includes a
wireframe of the flight hemisphere and all F2B Stunt figures of the correct shape and size per the
current FAI rules.
Before you go to the field, please read Placing the Camera to learn how to place the camera correctly in
the field for best results.
Important
DO NOT record the videos handheld!
ALWAYS mount the camera to a sturdy tripod or similar.
DO NOT move or adjust the camera setup while recording a flight.
To produce Basic videos of flights, you only need a suitable camera and a
tripod.
TODO: photo of an example setup for basic video.
To produce AR videos, more effort may be required. If your flying site already
has FAI F2B markers installed around the flight circle, then the surveying work is already done. Just
measure the distance from circle center to the markers. The elevation of the markers above the circle center
should be 1.5 meters in that case.
TODO: photo of an example setup for AR video.
If your field does not have F2B markers, you can install them yourself with some specialized equipment. You
will need at least a self-leveling rotary laser system and a laser distance measuring tool.
TODO: perhaps a separate chapter on how to DIY field markers, the recommended layout tools and
technique, etc.???
The recommended dimensions and placement of markers are described in Annex 4F, Appendix II of the FAI
Sporting Code.
Camera placement is important for capturing quality video of a Stunt flight. While it is acceptable to place
the camera farther than recommended and capture the entire flight hemisphere, it is generally
preferable to capture as much of the core of the maneuver space as possible by placing the camera closer.
This approach sacrifices the outer edges of the base; but takeoff, level/inverted flight, and landing
maneuvers are not easy to evaluate in video anyway. The other maneuvers should be the primary focus of video
recording.
Select a location outside the flight circle upwind of the expected maneuvers. This is typically
where the contest judges stand. The correct distance of the camera from the center of the circle depends
on the focal length of your optical system. Details are discussed below.
Deploy your tripod at the selected location. Weigh it down if possible so that it remains stable.
Mount the camera on the tripod. The resulting video must be in landscape orientation. If using a mobile
device, this means that you must orient the device horizontally.
Adjust the tripod so that the camera height is between approximately 1.0 and 1.5 m (about 3 to 5 ft).
Turn on the camera and make sure it is in video mode. In photo mode the aspect ratio of the image frame
will likely be different from that of the video, resulting in incorrect alignment.
Point the camera approximately at the center of the circle.
Tilt the camera upward so that there is a visible margin between the pilot’s feet at the center of the
circle and the bottom edge of the frame. During this adjustment make sure that the camera is
level. Most modern cameras have a built-in leveling guide—take advantage of it.
Pan the camera so that the frame’s vertical centerline aligns with the center of the flight circle.
When the above steps are followed, you will find that the top of the flight hemisphere is near the top of the
frame in your AR videos. You will also generally find that the center of the frame points somewhat above the
45° elevation at the far side of the hemisphere. This is usually the desired outcome.
When you don’t know the focal length of your camera system, the camera distance must be determined by trial
and error in the field. However, if you know your system’s focal length, we recommend that you use this
Field of View calculator to determine your camera
system’s angle of view. Look for the value labeled “Height” in degrees, in the section “Angle of View”:
If documentation for your lens is available, verify that your result is reasonably close to the
manufacturer’s listed specifications.
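If you know the physical height of your camera's sensor, you can also estimate this angle yourself. For a
rectilinear lens, the vertical angle of view is 2·arctan(h / 2f), where h is the sensor height and f is the
focal length. A minimal sketch (the sensor and focal-length values below are only examples)::

    import math

    def vertical_angle_of_view(sensor_height_mm: float, focal_length_mm: float) -> float:
        """Vertical angle of view of a rectilinear lens, in degrees."""
        return math.degrees(2.0 * math.atan(sensor_height_mm / (2.0 * focal_length_mm)))

    # Example: a full-frame sensor (24 mm tall) behind a 28 mm lens.
    print(round(vertical_angle_of_view(24.0, 28.0), 1))  # ~46.4 degrees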
VideoF2B includes a calculator for estimating the camera distance from circle center that will provide the
best video coverage. To use it, choose Tools ‣ Place camera… in the main menu:
Hover the mouse cursor over the values in the tables for detailed explanations of each value.
Enter the input values to the best of your knowledge:
The flight radius R is the distance from the pilot’s chest to the centerline of the aircraft.
The camera height C is relative to the flight base. For example, if the camera is 1 m above the pilot’s
feet, then C=-0.5.
Ground level G is also relative to the flight base. Under F2B rules, this value in meters is -1.50
and there should be no reason to adjust it.
Camera FOV angle A is the maximum vertical angle of view of your camera system as determined above.
As you adjust each input value, the values in the Results table will update accordingly. The values of
interest are in the row labeled Camera distance. These numbers represent the range of recommended
distances for the camera. Place the camera within this range for best results.
Danger
Please be aware that the outboard wing of the aircraft extends outside the flight hemisphere, and the
pilot never stays exactly in the center of the circle during a flight. Do not place the camera too close
to the flight circle, even when the calculated “nearest” distance value is very close to R!
Hint
You may use any suitable distance units for the values of R, C, and G, as long as you stay consistent. The
default values are in meters. All angular values are always in degrees.
Important
For safety reasons, the calculator does not allow the camera inside the flight hemisphere. That is,
the calculated “nearest” value of “camera distance” should never be less than the flight radius R.
If you encounter a calculation where this is not true, please submit a bug report with your input values.
With the above precautions in mind, you are ready to produce Basic or
Augmented-Reality videos.
For the technically inclined…
There are two criteria for camera placement.
The first may be obvious—the center of the flight circle must be visible in the FOV so that users may
select it during AR processing. This is shown in the calculator diagram by extending the bottom of the
FOV angle A to the point on the ground at the pilot’s feet.
The second criterion may not be immediately obvious. It is based on two facts:
The “camera cone” formed by the camera’s angle of view separates the AR hemisphere into two parts: the
“near” and the “far” volume. Image space is represented by integers, resulting in a “dead zone”
between the two volumes where the aircraft’s location cannot be determined. Whenever the aircraft
passes through this zone, the motion trace generated by VideoF2B “jumps” across the boundary without
any information between the two points. Note that this information is irrelevant during AR processing,
but it is vitally important during 3D tracking.
The Overhead Eight maneuver is critically close to the “dead zone”. To minimize the chances of the
aircraft passing across this boundary during the overhead eight, the calculator ensures that the point
labeled as “Tangent elevation” on the diagram is never above the 45° elevation of the flight
hemisphere. This criterion enforces a visible gap in video between the circle of 45° elevation (drawn
in bright green) and the visible edge of the flight hemisphere (drawn in magenta):
TODO: an example AR sphere due to a badly placed camera (too far from circle) that results in loss of
“gap”.
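As a rough illustration of the second criterion, the elevation of the tangent point can be derived from the
camera's position relative to the center of the flight sphere. The following is only a sketch of the geometry,
not the calculator's actual code; the example values of R, D, and C are arbitrary::

    import math

    def tangent_elevation_deg(R: float, D: float, C: float) -> float:
        """Elevation (degrees, measured from the sphere center) of the upper
        tangent point between the camera's line of sight and the flight
        sphere of radius R. D is the camera's horizontal distance from the
        circle center; C is its height relative to the flight base."""
        d = math.hypot(D, C)       # straight-line distance from camera to sphere center
        if d <= R:
            raise ValueError("The camera must be outside the flight sphere.")
        phi = math.atan2(C, D)     # elevation of the camera as seen from the sphere center
        # The tangent point lies acos(R/d) away from the camera direction, toward the top.
        return math.degrees(phi + math.acos(R / d))

    # The farther the camera, the closer this value creeps toward 90 degrees;
    # the "farthest" recommended distance is where it reaches 45 degrees.
    print(round(tangent_elevation_deg(R=21.5, D=30.0, C=-0.5), 1))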
A Basic video contains a colored trace of the path of the aircraft. No additional geometry is drawn.
Here is an example:
To produce a Basic video, follow these steps:
Record a Stunt flight using a video camera. For guidelines on how to position the camera in the field, see
Placing the Camera. Save the video file to your computer.
Start the VideoF2B application. The main window looks like this when the application starts:
Click the Browse for file button of the Video source box:
The “Load a Flight” dialog window. Just choose your video file from here.
Choose the desired video on your computer and click the Open button.
File browsing dialog. This may look different on your computer.
Click the Load button or just press the Enter key. The video will begin processing in
the main window.
The trace behind the aircraft grows up to 15 seconds long. During processing, you can clear the trace at
any time by pressing the Space bar.
If you wish to stop processing the video for any reason before VideoF2B finishes tracing it, press the
Esc key on the keyboard. This will stop the tracing, and the result will be a partially processed
video.
When finished, you will find the traced video file in the same location as the original video. The traced
video will have the same name as the original, but with a _out suffix. For example, if your original
video is named Flight1.mp4, the traced video will be named Flight1_out.mp4.
An Augmented-Reality (AR) video contains overlays of various reference graphics on
top of the original video footage. In addition to the motion trace behind the model aircraft, the AR graphics
may include the following:
A wireframe representation of the flight hemisphere, which includes reference elements such as the circle of
45° elevation and the visible edge of the hemisphere;
The nominal F2B figures, with optional start/end and diagnostic points (see User Controls).
To produce an AR video, follow these steps:
Record a Stunt flight using a video camera. For guidelines on how to position the camera in the field, see
Placing the Camera. Save the video file to your computer.
Process the flight video. See User Controls for guidance on manipulation of AR graphics.
When finished, you will find the processed AR video file in the same location as the original video. The AR
video will have the same name as the original, but with a _out suffix. For example, if your original
video is named Flight1.mp4, the processed AR video will be named Flight1_out.mp4.
Before you can produce Augmented-Reality videos, you must calibrate your camera
system. Camera calibration accomplishes two things in one step. First, it calculates distortion parameters of
the camera’s optical system. This allows the processor in VideoF2B to “undistort” every video frame so that
straight lines in the real world remain straight in video. Undistorted frames are essential to many image
processing tasks. Second, it establishes a relationship between the size of objects in video versus the size
of the same objects in the real world. This is important for drawing Augmented-Reality geometry of the correct
size and shape in the video.
Calibration involves the recording of a special video and consists of three easy steps. To begin, start
VideoF2B and choose Tools ‣ Calibrate camera… in the main menu. You will see the following
window:
You have two choices for the calibration pattern: display it on screen or print it to paper. The
recommended method is to print. However, if you do not have access to a printer, displaying it on screen is
also acceptable.
Important
To print the pattern you will need a PDF reader application, such as Adobe Acrobat, Foxit, or similar.
If you decide to print the pattern, make sure to mount it flat to a suitable piece of cardboard or
poster board for easy handling while maintaining accuracy.
Note
The absolute size of the pattern is not important. Whether you display it or print it, do not worry about
its true size. It is only important that the entire pattern is flat and visible.
Record the video using your camera system. The video should be fairly short; about 30-50 seconds is enough.
The pattern should be visible in its entirety throughout the video. Move and tilt the camera so that you
record as many perspectives of the pattern as possible. To see an example video, click the thumbnail under
Step 2 in the calibration window. An alternative method is to mount the camera on a tripod, then move and
tilt the printed pattern in front of the camera.
Attention
Configure your camera with the same video settings that you will use in the field to record the
flights. This means that your choice of lens, its focal length, and video resolution all
must be the same during calibration and during field recordings. If the focal length is adjustable
(also known as a “zoom lens”), then you must make sure to set the focal length to the same value during
field recordings as you did during calibration. When using the camera of a mobile device, always orient
the device in landscape mode (horizontally) and make sure you always choose the same zoom factor
and video resolution as you did during calibration. If you neglect to follow this rule, you will get
unexpected results in your Augmented-Reality videos. This rule does not
apply to the frame rate of the video.
If you chose the Display option for the pattern in Step 1, press the Esc key to return to the
calibration window after recording the video.
Transfer the video file to your computer. Under Step 3, browse to the file. Finally, press the
Start button at the bottom of the window. VideoF2B will begin processing the calibration video in
the main window:
Main window of VideoF2B showing camera calibration in progress.
As stated in the message window, the calibration process takes a while. The video playback will appear in slow
motion, and it will seem to “skip” and “freeze” at times, but do not fret – all is well. The calibration
process is computationally intensive. If you do want to stop the calibration at any time for any reason, just
press the Esc key. Otherwise, grab a cup of coffee, relax, and wait patiently until the progress bar
reaches 100%. When finished, the video will disappear from the main window, and you will see some information
about the results in the message window:
Main window at end of camera calibration. Take note of the messages in the message window.
If the calibration fails, most likely your video is too short and/or it does not show the complete pattern
from a sufficient number of points of view. In that case, record another video while paying attention to those
details.
If the calibration succeeds, VideoF2B will create a file named CamCalibration.npz and two image files in
the same folder as the calibration video. The CamCalibration.npz file is the calibration file for your
camera system. Do not lose it. You will need it for producing every Augmented-Reality video of the flights
you will record with your camera. You may also share it with others who have the same camera system as you.
For the technically inclined…
The two image files show a sample frame from the calibration video. The image calibresult_nocrop.png
is a full-size frame that is “undistorted”, i.e., straight lines of the pattern should appear straight in
the image. To achieve this, the calibration process transforms the original frame in such a way that empty
pixels appear around the edges of the undistorted image, giving the edges a “pincushion” look:
The strength of the pincushion effect depends mostly on the distortion inherent to the lens, and on the
focal length. Wide-angle action cameras typically show a stronger effect than longer lenses.
The other image file is calibresult.png. It is the same image as the “no-crop” image above, with one
important difference. It is cropped to the maximum usable area so that the empty pixels are no longer
visible:
Note that this always results in a smaller image than the full-size video frame that you see in the
camera. In the above examples, the “no-crop” image size is the original Full HD, or 1920x1080 pixels. The
cropped image size is 1910x1050 pixels. So a total of 10 pixels were lost from the sides, and a total of
30 pixels from the top and bottom of the original frame. It is important to keep this in mind when placing
the camera in the field. Give yourself some room, especially at the bottom of the frame, to account for
the lost pixels. VideoF2B will “upsize” calibrated video to the size of the original input video whenever
possible, but some pixels around the border of the original video will be lost due to calibration.
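For reference, the “no-crop” and cropped images correspond to the two standard outputs of an OpenCV-style
undistortion step. The snippet below is a hedged sketch of that idea, not VideoF2B's actual code, and the key
names inside the .npz file ("mtx", "dist") are assumptions::

    import cv2
    import numpy as np

    # Load a saved calibration (the key names are assumptions for illustration).
    data = np.load("CamCalibration.npz")
    mtx, dist = data["mtx"], data["dist"]

    frame = cv2.imread("sample_frame.png")
    h, w = frame.shape[:2]

    # Compute an optimal new camera matrix plus the region of interest (ROI)
    # that excludes the empty "pincushion" border pixels.
    new_mtx, roi = cv2.getOptimalNewCameraMatrix(mtx, dist, (w, h), 1, (w, h))

    undistorted = cv2.undistort(frame, mtx, dist, None, new_mtx)   # the "no-crop" image
    x, y, rw, rh = roi
    cropped = undistorted[y:y + rh, x:x + rw]                      # the cropped image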
Congratulations, you are ready to record Control Line Stunt videos! The next
step is field setup.
Be sure to choose the calibration file that corresponds to the camera and lens that were used to
record the video you selected above.
Note
If F2B markers are not available, but you still want to create video that corrects for camera
distortion (using your camera’s calibration file), turn on the option Skip camera
location. Note that in this case, entry of AR-related parameters is disabled and Augmented-Reality
graphics will not be drawn.
Enter the following AR-related parameters:
Flight radius (m)
The flight radius of the recorded flight, in meters.
Height markers: distance to center (m)
The horizontal distance from the center of the flight circle to the F2B markers, in meters.
Height markers: height above center of circle (m)
The elevation of F2B markers above the pilot’s feet at the center of the flight circle, in meters.
Important
Please use meters for the above three parameters.
Click the Load button or just press the Enter key.
After you load a flight for AR processing, the AR graphics can only be drawn in video after the flight has
been located. This is done via the locating procedure described below. This procedure establishes a
relationship between the real 3D world and the 2D video that was used to record it. The procedure consists of picking objects in video that are positioned at known locations in the real world.
To locate a flight, follow these steps after loading it:
The video window will display the first frame of your video so that you can select F2B markers. This
procedure locates the camera in video relative to the flight circle so that AR geometry can be displayed.
First step of camera locating: begin selecting markers.
Follow the prompts in the middle of the status bar to select markers. Be as accurate as possible when
selecting each marker.
To select a marker, point the mouse cursor to it and click the left mouse button.
To unselect the last selected marker, click the right mouse button anywhere in the video window.
You will be prompted to select the following four items:
Circle center
Select a point on the ground in the center of the circle. If you know that the pilot is standing
exactly in the center at the start of the video, select a point at his or her feet. If the pilot is
not standing in the center of the pilot circle at the start of the video, select a point on the ground
where you estimate the center of the pilot circle to be. This can be done by reviewing the video
separately in a video player. Fast-forwarding the video to a time when the pilot is in the middle of a
maneuver is the recommended method of estimating the location of the circle center.
Front marker
Select the center of a marker on the far side of the flight circle that is nearest to the middle of
the video frame. It does not matter which marker you choose to be the front, as long as markers
adjacent to it are visible in the video frame.
Left marker
Select the center of the nearest marker to the left of the front marker on the far side of the
flight circle, i.e., the next marker in the counterclockwise direction.
Right marker
Select the center of the nearest marker to the right of the front marker on the far side of the
flight circle, i.e., the next marker in the clockwise direction.
Camera locating in progress. Center, front, and left markers have been selected in this example.
When you select a marker, VideoF2B draws a small green circle around the selected point. Here is an example
of all four markers after selection:
When you select the final marker, you will see this prompt:
Confirmation prompt at end of camera locating procedure.
If you made incorrect selections, click No. The current marker selections will be cleared,
and you will have a chance to select all of them again.
If you are satisfied with your selections, click Yes. Processing will begin.
The flight locating procedure establishes a World Coordinate System (WCS) based on
the selected markers. The WCS is a right-handed Cartesian coordinate system. VideoF2B uses the WCS to
draw all AR graphics correctly in video. The position and orientation of the WCS is as follows:
Origin is at the center of the base. Thus, its elevation is 1.5 m above the pilot circle.
Positive Y-axis passes through the front marker.
Positive Z-axis points vertically upward and passes through the top of the flight hemisphere.
Positive X-axis is perpendicular to both Y and Z axes, and generally points to the right in video.
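For illustration only, the locating step conceptually reduces to a perspective-n-point problem: the four picked
image points are matched against their known WCS coordinates and solved for the camera pose. The sketch below
is not VideoF2B's implementation; the 45° marker spacing, the example distances, and the example pixel
coordinates and intrinsics are all assumptions::

    import numpy as np
    import cv2

    d = 25.0              # horizontal distance from circle center to markers (example, meters)
    h = 1.5               # marker height above the pilot's feet (meters)
    z_m = h - 1.5         # marker elevation in the WCS (0.0 when markers sit at base height)
    s = np.radians(45.0)  # assumed angular spacing between adjacent markers

    object_points = np.array([
        [0.0, 0.0, -1.5],                          # circle center (pilot's feet)
        [0.0, d, z_m],                             # front marker
        [-d * np.sin(s), d * np.cos(s), z_m],      # left marker (counterclockwise of front)
        [ d * np.sin(s), d * np.cos(s), z_m],      # right marker (clockwise of front)
    ], dtype=np.float32)

    # The four pixel locations picked by the user, in the same order (illustrative values).
    image_points = np.array([[960, 620], [960, 330], [600, 340], [1320, 340]], dtype=np.float32)

    # Camera intrinsics and distortion from the calibration file (illustrative values).
    mtx = np.array([[1500.0, 0.0, 960.0], [0.0, 1500.0, 540.0], [0.0, 0.0, 1.0]])
    dist = np.zeros(5)

    # Solve for the camera's rotation and translation relative to the WCS.
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, mtx, dist)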
TODO: replace the photo below with a field photo that shows:
all four markers and the camera on a tripod outside the circle.
VideoF2B reads frames from a given video in strict sequence from beginning to end. The user interface is
designed for an efficient workflow via the keyboard alone, although some of these controls are also available
as buttons in the main window, as seen above.
There is no fast-forward or rewind functionality. However, you can pause the processing at any time by
pressing P on the keyboard or clicking the corresponding button in the user controls. While processing is
paused, you can perform various manipulations of AR geometry, taking as much time as you need. When ready to
continue, press P again or click the button again.
To clear the trace behind the model aircraft, press Space. This is useful for presenting maneuvers clearly:
clear the trace shortly before an upcoming maneuver so that only that maneuver is traced.
To account for the pilot’s movement in the pilot circle during a flight, use the WASD keys to move the flight hemisphere in the world XY plane. The
keys operate as follows:
W moves the hemisphere in +Y direction (forward, away from the camera).
S moves the hemisphere in -Y direction (backward, toward the camera).
A moves the hemisphere in -X direction (to the left).
D moves the hemisphere in +X direction (to the right).
Every stroke of the above keys moves the hemisphere in the commanded direction by 0.1 m.
Pressing X resets the hemisphere’s center to the origin of the World Coordinate System.
The flight radius (R) is always displayed in the bottom left corner of AR videos. Additionally, when the AR
hemisphere’s center (C) is not at the origin, its XYZ offset will be displayed next to the flight radius:
Sphere information in AR video. All dimensions are in meters.
Tip
Compensating for the pilot’s off-center displacement
Position of the pilot along the X-axis is easy to match accurately. Position along the Y-axis is more
difficult to estimate because depth is difficult to gauge in video. Take advantage of the Reverse
Wingover maneuver to assess the pilot’s initial position. You will be able to adjust the hemisphere’s
position so that the aircraft’s centerline crosses the visible edge of the sphere, while keeping the
hemisphere’s center on the pilot. As the flight proceeds, use your best judgment. Other maneuvers whose
approaches cross the visible edge of the hemisphere above the base (entry and exit of Outside Square
Loops and entry of Overhead Eight) also help to correct for the pilot’s position along the Y-axis
throughout the flight.
To match the nominal figure to the maneuver flown by the pilot, use the arrow keys to rotate the
hemisphere. The keys operate as follows:
LeftArrow rotates the AR hemisphere counterclockwise on its vertical axis (i.e., the nominal
figure moves to the left as seen by the pilot).
RightArrow rotates the AR hemisphere clockwise on its vertical axis (i.e., the nominal figure
moves to the right as seen by the pilot).
Every stroke of these arrow keys rotates the hemisphere in the commanded direction by 0.5°.
To toggle the display of any nominal figure, click its corresponding checkbox in the user controls. You can
also advance to the next figure in the Stunt Pattern sequence with the DownArrow key or the corresponding
button in the user controls. The advancing function behaves as follows: if no figures are selected, it selects
loops; if exactly one figure is selected, it unselects that figure and selects the next figure in the sequence;
if the current figure is the four-leaf clover (the last in the sequence), the selection remains and advancing
has no effect; if more than one figure is selected, advancing likewise has no effect.
Note
Any combination of nominal figures can be displayed, even if only for
training and/or demonstration purposes.
Every maneuver has a start and an end point for judging purposes, as defined by the FAI F2B Rules. To toggle
the display of start and end points on the displayed nominal figure(s), click the Draw Start/End points
checkbox at any time during AR processing:
This controls the display of start/end points in displayed figure(s).
The start point is displayed in green, and the end point is displayed in red.
VideoF2B can optionally display diagnostic points. These are just visual aids for presentation. They are
defined as endpoints of the arcs that make up a figure. In simple loops, they’re at the bottom of the loop.
In more complex figures, diagnostic points help to visualize where the connections between the “straight”
segments and the corners or loops of the figure are located.
To toggle the display of diagnostic points on the displayed nominal figure(s), click the Draw Diagnostics checkbox at any time during AR processing:
This controls the display of diagnostic points in displayed figure(s).
Diagnostic points are displayed in alternating green and red colors per figure. For example, this is the
square horizontal eight with diagnostics displayed:
Q: Can I record the flight videos handheld?
A: Absolutely not! Mount your recording device to a sturdy tripod or similar. Do not move it or
adjust it while recording a flight. See Field setup for details.
Q: Why do I need markers?
A: F2B markers provide a base reference in the field for the pilot and for the judges. For VideoF2B, they
also relate the real world to camera images, so that augmented reality geometry can be drawn accurately in
video. Without the markers, augmented-reality geometry is not possible.
Q: My flying site does not have F2B markers, and installing them is not practical. Is there an alternative
method for creating AR videos that does not require markers?
A: This capability is a research & development project that is currently in progress.
Q: Can I move the camera during a flight once recording has started?
A: No. Doing so would require you to select the markers in video again for the new camera location.
Q: Does the trace drawn in video follow the CG of the model aircraft?
A: Not necessarily, but it tends to be fairly close to it. The motion detector follows the centroid (geometric center) of the silhouette of the largest moving object
in video.
Q: Why are background objects tracked instead of the model aircraft?
A: VideoF2B currently does not distinguish between object types; it simply follows the largest moving object.
If possible, avoid having moving objects in the background (e.g., a road).
Q: How accurate are the Augmented Reality graphics?
A: Field tests have proven that the augmented-reality geometry drawn in video is accurate within 10 cm
throughout the entire flight envelope.
Q: Can this system be used for computerized scoring of Stunt flights?
A: The concept of tracking the flights in three dimensions using recorded video for the purpose of
automated scoring is definitely under consideration. Some maneuvers are problematic to track accurately
(takeoff, level/inverted flight, landing), but the majority of the flight maneuvers are potential candidates.
An enhanced version of reality created by the use of technology to overlay digital information on an
image of something being viewed through a device. See Augmented Reality.
A shape, which makes up a separately recognizable complete part of a whole maneuver. For
example, the first loop of the three consecutive inside loops maneuver is referred to as a figure; but
the first loop which makes the first half of the first complete figure eight in the two consecutive
overhead eight maneuver is not referred to as a figure.
Condition when the model aircraft is flying in an attitude which is the reverse of upright
flight (colloquially, the model aircraft is “flying on its back”, is “flying upside-down”, or is
“flying inverted”).
An imaginary line drawn at right angles (90 degrees) to the horizontal, used as a
reference line when flying and scoring the size, positioning, symmetry, and superimposition of
various figures and maneuvers.
The full total of figures and segments necessary to complete the
maneuver marked under a separate numbered heading with bold type. For example, the take-off maneuver,
the three consecutive inside loops maneuver, and the single four-leaf clover maneuver, are all
referred to as a single whole maneuver throughout F2B Rules.
Used throughout F2B Rules in their original dictionary definition sense (that is: something that lasts
only for a very brief period of time). So, for example, the very short period during which the model
aircraft is required to be in a vertically-banked “knife-edge” attitude above the competitor’s head
during the two consecutive overhead eights maneuver is described in F2B Rules as “momentarily”.
A dimension, shape, or size that represents the true profile of a figure or maneuver,
and is used as the template against which actual shapes and sizes of figures or maneuvers are
compared.
A specifically defined part of a figure (or of a whole maneuver) in which certain
particular points are detailed. For example, the first loop which makes the first half of the first
complete figure eight in the two consecutive overhead eight maneuver is referred to as a segment.
params – A tuple of tuples with the position
and the keyword to replace
Returns:
The modified positional and keyword arguments
Return type:
tuple[tuple, dict]
Usage:
Given a method with the following signature,
assume we want to apply the str function to arg2:
def method(arg1=None, arg2=None, arg3=None)
Since arg2 can be specified positionally as the
second argument (1 with a zero index) or as a keyword,
we would call this function as follows:
replace_params(args, kwargs, ((1, 'arg2', str),))
This is especially useful because constructing a Path object
with an empty string causes the Path object to point to the
current working directory, which is not desirable.
Parameters:
string (str) – The string to convert
Returns:
None if string is empty,
or a Path object representation of string
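A minimal sketch of such a helper (the function name is illustrative)::

    from pathlib import Path
    from typing import Optional

    def path_or_none(string: str) -> Optional[Path]:
        """Return None for an empty string, else a Path built from it.

        Guards against Path("") silently meaning the current working directory."""
        return Path(string) if string else None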
Simple wrapper around QSettings.
Contains core definitions of all known keys and their default values.
Does not contain a strategy for versioning of settings.
Handles lookup of default values in the most basic manner.
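A minimal sketch of such a wrapper, assuming a Qt binding like PySide (the class, organization, and key names
are illustrative)::

    from PySide6.QtCore import QSettings

    # Core definitions of known keys and their default values (illustrative).
    _DEFAULTS = {
        "mru/video_dir": "",
        "core/live_videos": False,
    }

    class Settings:
        """Simple wrapper around QSettings with basic default-value lookup."""

        def __init__(self):
            self._qs = QSettings("VideoF2B", "VideoF2B")

        def value(self, key):
            # Fall back to the known default when the key has never been written.
            return self._qs.value(key, _DEFAULTS.get(key))

        def set_value(self, key, value):
            self._qs.setValue(key, value)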
Return the path of the bundle directory.
When frozen as a one-file app, this is the _MEI### dir in temp.
When frozen as a one-dir app, this is that dir.
When running as a script, this is the project’s root dir.
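The standard PyInstaller idiom for this looks roughly like the following (a sketch, not necessarily the
project's exact code)::

    import sys
    from pathlib import Path

    def get_bundle_dir() -> Path:
        """Bundle dir when frozen, otherwise the directory of this module."""
        if getattr(sys, "frozen", False):
            # One-file builds unpack to a temporary _MEIxxxx dir exposed as sys._MEIPASS;
            # fall back to the executable's directory for one-dir builds.
            return Path(getattr(sys, "_MEIPASS", Path(sys.executable).parent))
        # Running as a script: adjust the number of .parent hops to reach the project root.
        return Path(__file__).resolve().parent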
Get User-friendly names and versions of libraries
that we care about for bug reports. This is just a sub-list
of install_requires items in our setup.cfg.
The limits (min, max) of the elevation of the tangent point from camera to sphere
at the respective cam_distance_limits.
The point’s elevation is measured from flight center in degrees.
Base class for plotting primitives.
* Call draw() to draw the Plot instance in your image.
kwargs:
size: the line thickness or point radius.
color: the color of the primitives in this Plot.
is_fixed: bool indicating whether this Plot is fixed in object space or not.
If True, world transforms do not affect the object coordinates.
If False (default), then world transforms will
rotate, scale, and translate the object coordinates according
to the rules in the _calculate() method.
On a sphere of radius R, a fillet is defined as an arc of a small circle
of radius r between two great arcs of the sphere with angle psi
(\(\psi\)) between the arcs such that the small circle is tangent to
both arcs. The constructor of this class tries to calculate the parameters
of the fillet via the calculate() method.
Parameters:
R (float) – radius of the sphere.
r (float) – radius of the fillet.
psi (float) – angle between two great arcs that define the fillet.
is_degrees (Optional[bool], default: False) – If True, psi is given in degrees, otherwise it is
given in radians.
Given two intersecting planes with angle \(\psi\) between them, a
cone of slant height \(R\) and base radius \(r\) whose apex
rests on the intersection line of the planes will rest tangent to both
planes when the cone’s axis makes an angle \(\theta\) with the
intersection line of the planes in the bisecting plane.
If we set up a coordinate system on the cone’s base such that:
the origin is at the cone’s apex;
the \(-z\) axis is along the cone’s axis toward the cone’s base;
and
the \(+x\) axis is toward the intersection line,
then the coordinates of the points of tangency between the cone and the
planes are \((x_p, y_p, -d)\) and \((x_p, -y_p, -d)\).
Let \(\beta\) be the central angle of the arc along the cone’s base
that joins the two tangency points on the side of the intersection (the
shorter of the two possible arcs).
Perform the following:
Ensure that tangency is possible. This requires that
\[2 \alpha \leqslant \psi \leqslant \pi\]
where \(\alpha = \arcsin\left(\dfrac{r}{R}\right)\) is the
half-angle of the cone’s apex.
If this condition fails, the instance attribute is_valid is set to
False and this method returns early.
Find angle \(\theta\) (Gorjanc solution). Store it in the instance
attribute theta.
Find \(x_p\), \(y_p\), and \(d\). Store them in instance
attributes x_p, y_p, and d, respectively.
Find angle \(\beta\). Store it in the instance attribute beta.
Calculate the basic parameters of a triangular loop figure on a sphere.
Given an equilateral spherical triangle on the surface of a sphere of radius
R such that the top of a corner turn of radius r is located at
target_elev on the sphere, calculate:
The central angle \(\sigma\) of the side of the triangle,
The angle \(\phi\) between adjacent sides of the triangle.
Parameters:
R (float) – radius of the sphere.
r (float) – radius of the loop’s corner turns.
target_elev (Optional[float], default: 0.7853981633974483) – elevation of the highest point in the top turn. Defaults
to \(\dfrac{\pi}{4}\).
Convert a point from Cartesian coordinates to elevation-based spherical
coordinates.
Parameters:
p (array_like) – an array or sequence representing a point (x, y, z) in Cartesian space.
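A sketch of the conversion, assuming elevation is measured from the xy plane and azimuth from the +x axis
(the actual conventions may differ)::

    import numpy as np

    def cartesian_to_spherical(p):
        """Convert a point (x, y, z) to (r, azimuth, elevation), angles in radians."""
        x, y, z = np.asarray(p, dtype=float)
        r = np.sqrt(x * x + y * y + z * z)
        azimuth = np.arctan2(y, x)
        elevation = np.arcsin(z / r) if r > 0.0 else 0.0  # angle above the xy plane
        return r, azimuth, elevation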
Create an array of 3D points that represent a circular arc.
Return 3D points for an arc of radius r and included angle alpha
with point density rho, where rho is the number of points per
\(2\pi\). Arc center is (0,0,0). The arc lies in the \(xy\)
plane. The arc starts at zero angle, i.e., at (r,0,0), and proceeds
counterclockwise until it ends at alpha. Angle measurements are in
radians. The endpoint is always included.
Parameters:
r (float) – radius of the arc.
alpha (float) – included angle of the arc in radians.
rho (Optional[int], default: 100) – angular density of generated points. Defaults to 100.
Return type:
ndarray
Returns:
(N,3) array of points, where N>=3 and is proportional to
alpha and rho.
Warning
The meaning of the rho parameter may change in the future
from angular density to circumferential (linear) density to provide more
consistent point spacing on arcs of different radii in the same scene.
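A sketch that is consistent with the contract described above (endpoint included, at least three points,
point count proportional to alpha and rho); it is not necessarily the actual implementation::

    import numpy as np

    def arc_points(r: float, alpha: float, rho: int = 100) -> np.ndarray:
        """Return an (N,3) array of points along an arc of radius r and
        included angle alpha (radians) in the xy plane, centered at the origin."""
        # rho is points per full circle, so scale by the arc's fraction of 2*pi.
        n = max(3, int(np.ceil(rho * alpha / (2.0 * np.pi))) + 1)
        t = np.linspace(0.0, alpha, n)  # starts at angle 0, ends exactly at alpha
        return np.column_stack((r * np.cos(t), r * np.sin(t), np.zeros_like(t)))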
Calculate the properties of a cone rotated from the flight base to a certain
elevation.
Consider a base cone whose axis lies in the \(xy\) plane, and whose
ruled surface contains the \(y\) axis. Rotate this cone around the
\(x\) axis by an angle \(\beta\) such that the elevation of the
cone’s axis is at angle \(\theta\). This result is important because it
preserves the cone’s tangency point with the \(yz\) plane after the
rotation. In the cone’s base plane, a line segment from the cone axis to
this point of tangency lies in the \(xy\) plane when the cone is
unrotated (the “base” cone). After rotation of the cone by \(\beta\),
this same line segment is no longer parallel to the \(xy\) plane.
Effectively, it has been rotated around the cone’s axis by an angle that we
hereby call \(\delta\). The goal is to calculate \(\delta\) and one
of the other angles such that the caller has all three angles \(\delta\),
\(\theta\), and \(\beta\) at its disposal.
Parameters:
alpha (float) – the cone’s half-aperture, in radians. This is required.
theta (Optional[float], default: None) – the elevation of the cone’s axis, in radians.
beta (Optional[float], default: None) – the rotation of the cone around the \(x\) axis from the
base, in radians.
Raises:
ArgumentError – when theta and beta arguments are supplied
inconsistently, i.e., when both are given or neither is given.
Return type:
Tuple[float]
Returns:
angle \(\delta\) and one of the missing angles \(\theta\)
or \(\beta\). Two mutually exclusive cases are possible:
theta is known (as in top corners of square loops)
beta is known
Calculate the height of an equilateral spherical triangle as a function of
its side angle \(\sigma\). Takes advantage of the cosine rule in
spherical trigonometry.
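Presumably this is the spherical Pythagorean identity applied to the right triangle formed by half of one side
and the height (a sketch of the relation, not necessarily the exact implementation):

\[\cos\sigma = \cos\frac{\sigma}{2}\,\cos h \quad\Longrightarrow\quad h = \arccos\left(\frac{\cos\sigma}{\cos(\sigma/2)}\right)\]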
Pause/Resume processing at the current frame. Allows the following
functionality with immediate feedback while paused:
* To quit processing.
* To clear the track.
* To manipulate sphere rotation and movement.
Overridden to handle the closing of the main window in a safe manner.
Handles all exit/close/quit requests here, ensuring all threads are stopped
before we close.
Advance the current figure checkbox to next figure if appropriate.
Behavior:
If multiple figures are checked, do nothing.
If the last figure is checked, do nothing.
If no figures are checked, check the first one.
In all other cases, uncheck the current figure and check the next one.
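A sketch of this behavior over an ordered list of figure checkboxes (the names are illustrative, not the
actual widget API)::

    def advance_figure(checkboxes):
        """Advance the checked figure to the next one in the Stunt Pattern sequence.

        `checkboxes` is the ordered list of figure checkbox widgets."""
        checked = [i for i, cb in enumerate(checkboxes) if cb.isChecked()]
        if len(checked) > 1:
            return                          # multiple figures checked: do nothing
        if not checked:
            checkboxes[0].setChecked(True)  # nothing checked: check the first figure
            return
        i = checked[0]
        if i == len(checkboxes) - 1:
            return                          # last figure checked: do nothing
        checkboxes[i].setChecked(False)     # otherwise move on to the next figure
        checkboxes[i + 1].setChecked(True)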
Handle return codes and any possible exceptions that are reported
by VideoProcessor when its processing loop finishes.
Also update the UI as appropriate.
Add an exception hook so that any uncaught exceptions
are displayed in this window rather than somewhere users cannot see them
and therefore cannot report when we encounter these problems.
Parameters:
exc_type – The class of exception.
value – The actual exception object.
traceback – A traceback object with the details of where the exception occurred.
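In a Qt application this is typically done by replacing sys.excepthook; a sketch (the show_message method is
hypothetical)::

    import sys
    import traceback

    def install_exception_hook(window):
        """Route uncaught exceptions to the given window's message area."""
        def _hook(exc_type, value, tb):
            msg = "".join(traceback.format_exception(exc_type, value, tb))
            window.show_message(msg)                 # hypothetical: append to the message area
            sys.__excepthook__(exc_type, value, tb)  # keep the default stderr output as well
        sys.excepthook = _hook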
This documentation uses reStructured Text (reST) as its markup language. The structure and syntax borrow
shamelessly from the excellent Inkscape Beginners’ Guide. Please refer to the Sample Chapter for an outline of the syntax and
most of the elements used in this guide.
Note
Only some of the custom styling used in the Inkscape guide applies to this guide. When in doubt,
refer to files in the static folder of this documentation for implementation details. Edit them as
necessary to suit our styling needs.