Reality Field is designed as the central point for all camera data on a virtual production set. RF is built for narrative, cinematic use, but it has a number of features that also make it suitable for broadcast and events.
The core functionality of RF is to ingest data, break it down into generic data types, conform those types into common units and coordinate systems, and then make the result available to the user. That data can be mixed and matched, further conformed, and then output.
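To illustrate the "conform into common units" step, here is a minimal sketch. Note that the names (`TrackerSample`, `conform`, the unit table) are hypothetical and not part of Reality Field's actual API; the point is only that heterogeneous device data is normalized before it reaches the user.

```python
from dataclasses import dataclass

# Hypothetical illustration only: Reality Field's real data model is not
# shown here. This sketches the idea of conforming device data to a
# common unit (meters) before it is exposed to the rest of the pipeline.

UNIT_TO_METERS = {"mm": 0.001, "cm": 0.01, "m": 1.0, "in": 0.0254}


@dataclass
class TrackerSample:
    x: float
    y: float
    z: float
    unit: str  # unit the device reports in


def conform(sample: TrackerSample) -> TrackerSample:
    """Convert an incoming sample to the pipeline's common unit (meters)."""
    scale = UNIT_TO_METERS[sample.unit]
    return TrackerSample(sample.x * scale, sample.y * scale, sample.z * scale, "m")


# A sample arriving in centimeters is conformed to meters:
conformed = conform(TrackerSample(150.0, 20.0, 0.0, "cm"))
print(conformed.x, conformed.unit)
```

Because every input is conformed at the boundary, downstream consumers can mix sources freely without caring which device produced which sample.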
We take a guided approach to this. Many other tools hand the user a scripting or node system and leave them to build their own functionality. RF, by contrast, is built specifically as a camera data production tool, so it is structured to allow flexibility within a fixed set of camera parameters.
For example, trackers have a selector for Focus data. The selector itself is not editable, but where the focus data comes from is entirely up to the user. This makes RF a much faster and easier-to-use tool than others on the market.
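The fixed-slot, user-selectable-source pattern described above can be sketched as follows. These class and enum names are illustrative assumptions, not Reality Field's actual interface; they show how the Focus slot stays fixed while its source remains the user's choice.

```python
from enum import Enum

# Hypothetical sketch: the source names below are invented for illustration.
# The design point: the *parameter* (Focus) is fixed by the tool, while the
# *source* feeding it is freely selectable by the user.


class FocusSource(Enum):
    LENS_ENCODER = "lens_encoder"
    TRACKING_SYSTEM = "tracking_system"
    MANUAL = "manual"


class Tracker:
    def __init__(self) -> None:
        # The Focus slot always exists; only its source varies.
        self.focus_source = FocusSource.LENS_ENCODER

    def select_focus_source(self, source: FocusSource) -> None:
        # Only known source types are accepted, so there is no way to
        # wire up an unsupported or malformed input by accident.
        self.focus_source = source


t = Tracker()
t.select_focus_source(FocusSource.TRACKING_SYSTEM)
```

Constraining the slot while freeing the source is what keeps the tool flexible without inviting the failure modes of a fully open node graph.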
Reality Field aims to create the maximum amount of flexibility while adding the fewest points of failure. Virtual production sets are chaotic places where a lot can go wrong. We don't want to contribute to the complexity problem; we want to solve it.
In practice, this means RF will not duplicate features already provided by the systems it communicates with. This may seem like an odd decision, since it means we purposely withhold certain features. But at 2 AM the night before a shoot, you'll be happy to have one less system to check while troubleshooting.
As examples, RF will likely never implement the following features:
These are all features better implemented upstream, downstream, or externally from Reality Field. Adding them to RF would only add needless complexity to the set as a whole.