FAME™ has provided the intelligence, surveillance and reconnaissance (ISR) community with the end-to-end capability to ingest, annotate, store and disseminate motion imagery using technologies and techniques adapted from the broadcast industry. The newest release of the architecture, FAME 3.0, expands on previous versions by adding important new capabilities, including virtualization of FAME’s server-side components, enterprise services that federate motion imagery data across many locations, a new Web-based client, a geospatially enabled multiviewer, and a Video Exploitation Processor that conditions, normalizes and corrects incoming motion imagery streams.
The system ingests H.264 or MPEG-2 transport streams in digital standard-definition (SD) and high-definition (HD) formats. Metadata carried within a stream is retained with the ingested video and stored in the FAME database for later searching and cataloging. The stored information is Motion Imagery Standards Board (MISB)-compliant and includes geospatial and mission-critical data, greatly improving the ability to discover relevant video clips. Temporal and geospatial searches may be filtered further using mission or external metadata. Geospatial markers within the video are inserted into the database as they are received, making searches accurate to the second or frame. Using FAME’s new Video Exploitation Processor (VEP), all video is time-stamped, providing a universal reference for annotation, metadata correlation and searching.
In addition, the VEP includes utilities to analyze and correct stream-based metadata, ensuring that all streams include minimum metadata sets encoded according to MISB standards.
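MISB-compliant stream metadata is carried as key-length-value (KLV) triplets with BER-encoded lengths. The following sketch shows the general shape of that parsing; it is an illustration only, not FAME's or the VEP's actual code, and the sample payload bytes and tag numbers are hypothetical.

```python
def parse_ber_length(buf, pos):
    """Decode a BER length field; return (length, bytes consumed)."""
    first = buf[pos]
    if first < 0x80:                 # short form: length fits in 7 bits
        return first, 1
    n = first & 0x7F                 # long form: next n bytes hold the length
    return int.from_bytes(buf[pos + 1:pos + 1 + n], "big"), 1 + n

def parse_local_set(buf):
    """Parse tag/length/value triplets from a KLV-style local set payload."""
    items, pos = {}, 0
    while pos < len(buf):
        tag = buf[pos]               # single-byte tags only, for brevity
        pos += 1
        length, used = parse_ber_length(buf, pos)
        pos += used
        items[tag] = buf[pos:pos + length]
        pos += length
    return items

# Hypothetical payload: tag 2 = an 8-byte timestamp, tag 13 = a 4-byte value
payload = (bytes([2, 8]) + (1234567890).to_bytes(8, "big")
           + bytes([13, 4]) + b"\x40\x00\x00\x00")
fields = parse_local_set(payload)
```

A conditioning stage like the VEP can walk such triplets to verify that the minimum metadata set is present and re-encode anything that is malformed.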
User and group security are maintained in FAME 3.0 through Lightweight Directory Access Protocol (LDAP) synchronization with Active Directory. LDAP is an Internet protocol for querying and maintaining directory information; users apply “filters” to locate a specific person or group. Active Directory is a Microsoft technology that stores information and settings in a central database and lets administrators assign policies, deploy software, and apply critical updates across an organization. With FAME 3.0, administrators can further delineate user access rights down to the stream level.
If a user or group has access to a stream, it also has access to the data created for that stream. Furthermore, an administrator can set up access to areas of the system; for example, one user or group may have rights to see and edit data but not to create subclips from the original file. Rights can also be set on folders, allowing access to certain documents but not others.
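The rights model described above can be pictured as a per-principal set of granted operations. This is a minimal hypothetical sketch, not FAME's actual security implementation; the principal names and right names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    # rights per principal (user or group), e.g. {"analysts": {"view", "edit"}}
    grants: dict = field(default_factory=dict)

    def allow(self, principal, *rights):
        """Grant one or more rights to a user or group."""
        self.grants.setdefault(principal, set()).update(rights)

    def can(self, principal, right):
        """Check whether a principal holds a given right."""
        return right in self.grants.get(principal, set())

policy = AccessPolicy()
# This group may see and edit data for the stream...
policy.allow("imagery-analysts", "view", "edit")
# ...but holds no "subclip" right, so subclip creation is denied.
```

The same check applies uniformly whether the protected object is a stream, its derived data, or a folder of documents.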
FAME 3.0 is deployed on virtual machines, greatly simplifying system deployment and administration. Virtualization allows FAME to run multiple virtual machines on a single physical machine, each sharing that machine’s resources; different virtual machines can run different operating systems and multiple applications on the same hardware. Consolidating the components of the FAME architecture in this way enables advanced failover and redundancy techniques, reduces the hardware footprint and complexity of scaling FAME to many hundreds of users or streams, provides load-balancing services, and increases network efficiency by constraining much of the network traffic to a single physical server.
FAME 3.0 also expands the concept of markers. Markers are tags in the system at the video frame level, providing an integration point for external systems that produce relevant metadata with temporal or geospatial references. An analyst can view the incoming stream and add markers in real time, while a system that is collecting or generating other intelligence data can use the FAME marker interface to simultaneously annotate the motion imagery. This creates a multidimensional data layer that greatly enhances the imagery’s utility to other users. In addition, all viewers can see the metadata as it is created and interact with it through the FAME interface.
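A frame-level marker essentially pairs a point on the stream's universal time reference with a label and an optional geospatial fix, kept in temporal order so viewers can query any span of the video. The sketch below illustrates that idea under invented names; it is not FAME's marker interface.

```python
from bisect import insort
from dataclasses import dataclass, field

@dataclass(order=True)
class Marker:
    timestamp: float                                  # seconds on the stream's time reference
    label: str = field(compare=False)                 # analyst or machine annotation
    lat: float = field(compare=False, default=None)   # optional geospatial reference
    lon: float = field(compare=False, default=None)

class MarkerTrack:
    """Markers for one stream, kept sorted for frame-accurate lookup."""
    def __init__(self):
        self._markers = []

    def add(self, marker):
        insort(self._markers, marker)   # ordered insert, so queries stay fast

    def between(self, t0, t1):
        """Return all markers whose timestamps fall in [t0, t1]."""
        return [m for m in self._markers if t0 <= m.timestamp <= t1]

track = MarkerTrack()
track.add(Marker(5.0, "vehicle stops", lat=33.2, lon=44.4))
track.add(Marker(1.0, "convoy enters frame"))
track.add(Marker(3.0, "turn at intersection"))
```

Because both human analysts and external collection systems append to the same track, every viewer of the stream sees one merged annotation layer.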
Conflicts and activities that span theaters and other boundaries are accommodated within the architecture: an event in one region of the world can have ties to an event in another, and FAME 3.0 reflects the fact that events have no boundaries. Video and metadata created and cataloged in one region are immediately available to all regions, allowing soldiers, analysts and decision makers to view files, dissect data, and collaborate in real time about events happening across the entire network. By federating data across the enterprise, users can browse or search for data no matter where it originated.
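Conceptually, a federated query fans out to every regional catalog and merges the hits into one result list, so the caller never needs to know where a clip originated. The following sketch, with hypothetical catalog contents and field names, illustrates that pattern; it is not FAME's enterprise-services API.

```python
def federated_search(catalogs, predicate):
    """Query every regional catalog and merge matching clips, newest first."""
    results = []
    for region, clips in catalogs.items():
        for clip in clips:
            if predicate(clip):
                # Tag each hit with its origin so users can still see provenance.
                results.append({**clip, "region": region})
    return sorted(results, key=lambda c: c["time"], reverse=True)

# Hypothetical regional catalogs with cataloged clip records.
catalogs = {
    "region-east": [{"id": "clip-1", "time": 100, "tag": "convoy"}],
    "region-west": [{"id": "clip-2", "time": 250, "tag": "convoy"},
                    {"id": "clip-3", "time": 180, "tag": "harbor"}],
}
hits = federated_search(catalogs, lambda c: c["tag"] == "convoy")
```

A real deployment would issue these regional queries concurrently over the network, but the merge-and-rank step is the same.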
The unique display needs of operations centers are addressed by FAME 3.0’s new geospatially enabled multiviewer. This provides a mapping context to a data wall, allowing multiple incoming motion imagery streams to be displayed in the context of a Common Operating Picture. The large-format display is dynamically created using rules and preferences, such that certain activities can trigger a change that highlights a particular stream or group of streams. For example, a user can view all of the video in a Region of Interest superimposed on a Google Earth background. As the user zooms in and out of the Google Earth client, the video playing on the map background scales with it. This provides a familiar and simple mechanism for navigating all of the feeds in a theater or region.
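The rules-and-preferences idea can be reduced to evaluating each incoming stream against a list of trigger predicates and highlighting whichever streams match. This is a hypothetical sketch of that logic only; the stream attributes and rule shapes are invented, not the multiviewer's actual configuration model.

```python
def apply_rules(streams, rules):
    """Return the ids of streams that any display rule selects for highlighting."""
    return {s["id"] for s in streams for rule in rules if rule(s)}

# Hypothetical incoming streams on the data wall.
streams = [
    {"id": "uav-1", "activity": "tracking", "region": "ROI-A"},
    {"id": "uav-2", "activity": "loiter",   "region": "ROI-B"},
]

# Example rule: highlight any stream with an active track.
rules = [lambda s: s["activity"] == "tracking"]
highlighted = apply_rules(streams, rules)
```

Each time a stream's state changes, re-evaluating the rules yields the new highlight set, which drives the layout of the wall.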
In short, FAME 3.0 processes full-motion video and organizes, processes and federates data about that video in real time, making it available to the right people (war fighters, analysts and decision makers) in the right format and turning raw information into an asset for the ISR community.