The Pixel Farm

The Pixel Farm manufactures and markets innovative image-processing technologies that meet the demands of professionals working in the motion picture, broadcast TV and interactive entertainment industries. Their products, which address VFX, DI and Image Restoration, are well known and well loved by digital artists worldwide: they integrate seamlessly into the most demanding post-production environments whilst supporting creativity and maximising productivity.



PFTrack 2017
It’s here! PFTrack 2017 is possibly the biggest upgrade since the introduction of the highly acclaimed node tree in 2011. Integrating PFDepth has allowed us to create a groundbreaking next generation of PFTrack, unrivalled by any other application for features, functionality and outright innovation.

All functionality of PFDepth embedded in PFTrack
All PFDepth nodes are now fully integrated and available in PFTrack
Many more ways to create and manipulate depth maps:
  • Updated Z-Depth Solver node
  • Z-Depth Tracker, Merge, Edit, Filter, Composite and Cache nodes
  • Z-Depth Object node
  • Rotoscope-based depth editing
  • Ideal tool to prepare clips for z-based compositing
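
For readers unfamiliar with z-based compositing, the core idea is simply to choose, per pixel, whichever layer lies nearer the camera according to its depth map. The sketch below is generic NumPy code illustrating that bare operation, not PFTrack's implementation:

    import numpy as np

    def z_merge(fg_rgb, fg_depth, bg_rgb, bg_depth):
        """Per-pixel depth merge: keep whichever layer is nearer the camera.

        fg_rgb, bg_rgb     : (H, W, 3) float colour arrays
        fg_depth, bg_depth : (H, W) float camera-space depth (smaller = nearer)
        """
        nearer = fg_depth <= bg_depth                  # pixels where the foreground wins
        return np.where(nearer[..., None], fg_rgb, bg_rgb)

In practice edges are anti-aliased and depth maps contain holes, which is where the depth editing and filtering tools listed above come in.
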
Extended stereo camera and image pipeline:
  • Build Stereo Camera node to automatically position the right-eye camera after tracking the left-eye
  • Stereo Disparity Solver, Disparity Adjust and Disparity-to-Depth conversion nodes (the standard conversion is sketched after this list)
  • Fix common issues such as stereo keystone alignment and left/right-eye colour and focus mismatches
  • Render left and right-eye images from a single clip using Z-Depth data
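
For rectified stereo footage, disparity and depth are related by the textbook formula depth = focal length × baseline / disparity, which is what a disparity-to-depth conversion evaluates per pixel. A minimal sketch of that relation, with illustrative parameter names rather than the node's actual implementation:

    import numpy as np

    def disparity_to_depth(disparity_px, focal_px, baseline_m, eps=1e-6):
        """Rectified-stereo relation: depth = f * B / d.

        disparity_px : (H, W) disparity map in pixels (assumes positive disparities
                       for points in front of the cameras)
        focal_px     : focal length in pixels
        baseline_m   : interaxial distance between the eyes, in metres
        Near-zero disparities are clamped so distant points map to large depths.
        """
        return focal_px * baseline_m / np.maximum(disparity_px, eps)

Running the same relation in reverse (disparity = f * B / depth) is the basis for rendering left and right-eye views from a single clip plus Z-Depth data, as listed above.
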
User Interface Updates and Productivity Enhancements
  • Node creation panel has been updated with nodes organised into groups to make them easier to find
  • New Custom node group, where commonly used nodes can be placed for quick access
  • Tree layouts can be saved as XML preset files to help quickly construct common sets of nodes
  • Tree preset XML files can be copied onto other machines or given to users to share common layouts
Extended Digital Cinematography Camera Support
  • Added support for reading ARRI RAW media files
  • Camera and lens metadata is automatically read from RED and ARRI source files
  • ARRI metadata can also be read from DPX, OpenEXR or Quicktime ProRes files
  • Added support for importing custom XML metadata to the Clip Input node
  • All metadata is passed through the tree and can be accessed by Python or export nodes
Advanced Photogrammetry Texture Extraction Tools
- An optimized texture map can now be created automatically in the Photo Mesh node as part of the simplification process
- Exposure and brightness differences in the source media can be automatically corrected to provide the best quality texture map (a simple gain-matching sketch follows this list)
- Exposure-balanced images are automatically passed downstream, and can be used in the Texture Extraction node for manual texture painting if required
- Normal, displacement and occlusion maps can also be generated during simplification, to ensure the simplified mesh retains as much visual fidelity as possible
  • Normal maps support both world space and Mikk tangent space
  • Occlusion maps can be generated for either the sky or local surface occlusion
  • Additional texture maps are exported automatically by the Export node
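
To illustrate what exposure balancing involves, the sketch below applies a single global gain per image so that mean luminances agree across the set. This is a deliberately crude stand-in under simple assumptions (linear colour, one gain per view), not The Pixel Farm's correction:

    import numpy as np

    def balance_exposure(images):
        """Scale each image so its mean matches the overall mean of the set.

        images : list of (H, W, 3) float arrays in linear colour space
        Returns gain-corrected copies of the inputs.
        """
        means = [float(img.mean()) for img in images]
        target = float(np.mean(means))
        return [img * (target / m) for img, m in zip(images, means)]

Balancing the source views before blending them into a texture map avoids visible seams where photographs taken at different exposures meet on the mesh.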



Experimental RGBD Pipeline for Depth Sensors
- Z-Depth data captured by external sensors can be attached to an RGB clip and passed down the tracking tree
- Auto Track and User Track nodes updated to read z-depth values for trackers at each frame
- Camera Solver node will use tracker z-depth values to help solve for camera motion (see the back-projection sketch after this list)
  • Can reduce drift in long shots
  • Can improve accuracy when tracking complicated camera movements
  • Provides 3D data for nodal pans
  • Provides a real-world scale without any additional steps
- Z-Depth Mesh node can be used to convert depth maps into a coloured triangular mesh
- An iOS application will be released during 2017 allowing depth data to be recorded using an iPad and the Occipital Structure Sensor capture device
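
The benefit of sensor depth comes from basic pinhole geometry: a 2D tracker with a known depth back-projects to a fully determined 3D point in camera space, which both constrains the solve and fixes metric scale. A minimal sketch of that lift, ignoring lens distortion and using illustrative names rather than PFTrack's API:

    import numpy as np

    def back_project(x_px, y_px, depth_m, focal_px, cx, cy):
        """Lift a tracker position plus sensor depth to a camera-space 3D point.

        (x_px, y_px) : tracker position in pixels
        depth_m      : z-depth reported by the sensor at that pixel, in metres
        focal_px     : focal length in pixels; (cx, cy) is the principal point
        """
        x_cam = (x_px - cx) * depth_m / focal_px
        y_cam = (y_px - cy) * depth_m / focal_px
        return np.array([x_cam, y_cam, depth_m])

Because the sensor reports depth in real-world units, points reconstructed this way, and the camera path solved from them, come out at real-world scale with no extra calibration step. Applying the same lift to every pixel of a depth map and connecting neighbouring points is, in essence, how a depth map becomes a coloured triangular mesh.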

Additional Improvements and Features
  • Documentation updates and improvements
  • Added camera presets for ARRI ALEXA and RED cameras
  • Improved initial keyframe selection algorithm in Camera Solver node
  • Improved Auto Track feature tracking when using undistorted image plates containing blank edge areas
  • Custom XML metadata import can be used to define per-frame lens distortion, focal length and camera pose values
  • Cooke /i Data file import has been moved from the Edit Camera node into the Clip Input node
  • Added support for importing .PTS and FARO .XYB LIDAR files in the Survey Solver node
  • Improved pivot-point handling in the Survey Solver node when LIDAR datasets contain stray points far from the scene
  • Added an XML export python script to store camera data using the same custom XML schema as supported by the Clip Input node
  • Added depth-test and back-face culling options to the Geometry Track node to help with painting vertex weights and deformable groups on complex geometric models
  • Added a focal length reset button to the Camera Solver and Survey Solver nodes that can be used to reset the solved focal length to the incoming value if it has been set by another node upstream
  • Added exposure and image processing controls to the Clip Input node for control of ARRI RAW and OpenEXR decoding