MILPITAS, Calif., Dec. 10, 2019 /PRNewswire/ -- Gyrfalcon Technology Inc. (GTI) sees interest surging around MPEG-7's latest standards, which usher in a wave of capabilities leveraging AI and, more importantly, solve issues for AI that promise to accelerate deployment, reduce latency, lower cost and improve compatibility. Such standards are timely, as machines are becoming the largest and fastest-growing segment of video users, driven by the rapid growth in IoT and AI.
Mobilocity, an analyst firm, published a paper, "Solving the Machine-to-Machine (M2M) Transmission & Search Bottleneck" (http://bit.ly/2t1umdL), highlighting the need for such a standard and the anticipated benefits and market impact of addressing these issues.
With CDVA (Compact Descriptors for Video Analysis), approved as part of the standard in July 2019, a number of significant benefits become immediately clear. First, it enables a range of new capabilities and service delivery models that incorporate the edge and unburden the value chain for automated intelligence. Second, the industry gains the ability to produce the "words" of a universal vocabulary, a "language for machines" that is compatible across devices and processors from different providers and ecosystems.
The New Capabilities, Starting with "Shoot & Search"
CDVA enables images and video to be encoded as they are captured, creating the opportunity to unburden network and storage resources for dramatic benefits. It supports machine-only as well as hybrid (machine and human) formats, serving various service models and a broad range of applications.
Machine-only encoding requires only basic, inexpensive camera sensors to extract features into feature maps that machines can use without the high resolution humans require, so content can be captured, used and stored in much smaller files. Files could be as much as 1,000 times smaller, reducing the impact on networks and storage and thereby reducing latency and energy use.
Hybrid encoding allows people to use the video while the information machines need to share is embedded alongside it. This hybrid encoding embeds the metadata automatically using AI, making image and video files searchable with greater speed and precision than is currently possible. One way to think about hybrid encoding is as "closed captioning" for machines: only machines detect and use the encoded features, gaining optimized understanding efficiency without affecting the human consumers of those files.
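As a rough illustration of the two formats described above (not GTI's implementation; the descriptor extraction and file layout here are hypothetical placeholders, not the actual CDVA algorithm), a machine-only pipeline stores just a compact feature vector, while a hybrid pipeline keeps the human-viewable video and attaches the same vector as sidecar metadata:

```python
# Hypothetical sketch: machine-only vs. hybrid CDVA-style encoding.
# The extraction below is a simple pooling stand-in, not the real CDVA
# descriptor defined by the standard.
import json
import numpy as np

def extract_descriptor(frame: np.ndarray, dims: int = 128) -> np.ndarray:
    """Reduce a frame to a small fixed-length feature vector (placeholder)."""
    flat = frame.astype(np.float32).ravel()
    pooled = np.resize(flat, (dims, flat.size // dims)).mean(axis=1)
    return pooled / (np.linalg.norm(pooled) + 1e-9)

frame = np.random.randint(0, 255, (1080, 1920, 3), dtype=np.uint8)  # one HD frame

# Machine-only: store only the descriptor (hundreds of bytes vs. megabytes of pixels).
descriptor = extract_descriptor(frame)
np.save("frame_descriptor.npy", descriptor)

# Hybrid: keep the video for humans and attach the descriptor as sidecar metadata,
# analogous to "closed captioning" for machines.
sidecar = {"frame_index": 0, "descriptor": descriptor.round(4).tolist()}
with open("video_features.json", "w") as f:
    json.dump(sidecar, f)
```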
CDVA opens the door to exciting new capabilities, such as accelerated and precise searches of content "on the device" or on "home servers," where users want to keep personal data for security and privacy. Consumers currently lack a means of easily searching the tens of thousands of image and video files stored on computers, phones and electronics; with CDVA, users can open their camera, gallery or browser, "Shoot" an image and recall the most closely matching files for their "Search." They can use the same capability to search more quickly and precisely the libraries of service providers offering entertainment and education content that complies with the new MPEG-7 standards.
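A minimal sketch of how such an on-device "Shoot & Search" might rank a library by descriptor similarity (the library, descriptors and ranking below are hypothetical; a conforming implementation would use the matching procedure defined by the CDVA standard):

```python
# Hypothetical "Shoot & Search": rank stored files by cosine similarity of
# their precomputed feature vectors against a freshly captured query vector.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Placeholder on-device library: file name -> descriptor extracted at capture time.
library = {f"clip_{i:04d}.mp4": np.random.rand(128) for i in range(10_000)}

def shoot_and_search(query_descriptor: np.ndarray, top_k: int = 5):
    scored = ((name, cosine_similarity(query_descriptor, vec))
              for name, vec in library.items())
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

print(shoot_and_search(np.random.rand(128)))
```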
Producing the "Words" for the Emerging Machine "Language"
As a standard, CDVA extracts features from images using a camera sensor equipped with a local AI processor and produces a feature map to identify the object, activity, location, etc. These feature maps are defined by the standard, so they can be shared between devices and processors from different manufacturers. VCM (Video Coding for Machines), on the MPEG roadmap, will provide creative ways to aggregate feature maps to deliver "machine understanding" of images and videos, much as using different words in the right sequences allows people to communicate through language. The ability for developers to share algorithms stemming from shared global standards like CDVA and VCM addresses what has been a development challenge for new products and services that integrate AI.
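As a rough illustration of the interoperability idea (the wire format below is hypothetical and not the bitstream syntax defined by the standard), a descriptor produced on one vendor's device can be parsed and used on another vendor's device as long as both follow the same standardized layout:

```python
# Hypothetical descriptor exchange between two vendors' devices. The fixed
# header + payload layout is an illustration only; the real CDVA bitstream
# syntax is defined by the MPEG-7 standard.
import struct
import numpy as np

def pack_descriptor(vec: np.ndarray, frame_index: int) -> bytes:
    """Vendor A: serialize a feature vector with a small fixed header."""
    header = struct.pack("<II", frame_index, vec.size)
    return header + vec.astype("<f4").tobytes()

def unpack_descriptor(payload: bytes):
    """Vendor B: parse the same layout without knowing Vendor A's hardware."""
    frame_index, size = struct.unpack_from("<II", payload)
    vec = np.frombuffer(payload, dtype="<f4", count=size, offset=8)
    return frame_index, vec

blob = pack_descriptor(np.random.rand(128).astype(np.float32), frame_index=42)
index, vec = unpack_descriptor(blob)
print(index, vec.shape)
```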
Many markets will benefit from the adoption of VCM, including Smart Home, Smart City, Autonomous Vehicles and Intelligent Transportation. Devices can include robots, drones, autonomous vehicles, traffic cameras, surveillance cameras, and all manner of camera-equipped appliances and equipment. All of the captured video will be more usable, thanks to indexed features embedded on the frames of the video. Much of the video will be needed only by machines, allowing sensors to be basic, lower cost and less power hungry, and the resulting files to be very small. This will reduce network congestion, since smaller files are sent, and lower the demand for storage of archived video.
"This is what we are seeing in working with the new standard, and proving out the effectiveness of CDVA and VCM using AI processors and camera sensors," said Dr. Menouchehr Rafie, Vice President of Advanced Technologies at Gyrfalcon Technology Inc. "Transporting compressed extracted feature vectors rather than compressed raw textual data will drastically reduce the data amount for video transmission or storage and achieve interoperability between various applications and devices particularly in the emerging 5G IoT and V2X standards."
White papers authored by Dr. Rafie can be found here: https://www.gyrfalcontech.ai/automotive-whitepaper/ and https://www.gyrfalcontech.ai/intelligent-transportation-whitepaper/
Sign Up for Access to VCM Archives: https://lists.aau.at/mailman/listinfo/mpeg-vcm
See Real CDVA & VCM Demos @ CES2020 in Las Vegas: January 7-9, 2020
GTI will be showing real demos of applications using the standards, operating on production-grade processors used by customers, along with demos of its own edge and data center processors used in the commercial products of partners and customers. Interested parties can request a private showing in the company suite here: https://www.gyrfalcontech.ai/ces-meeting-request.
About Gyrfalcon Technology Inc.
Gyrfalcon Technology Inc. (GTI) is the world's leading developer of high-performance, low-power AI accelerators packaged in low-cost, small-sized chips. Founded by veteran Silicon Valley entrepreneurs and artificial intelligence scientists, GTI drives adoption of AI by bringing the power of cloud artificial intelligence to local devices and by improving cloud AI with greater performance and efficiency, providing the utmost in AI customization for new equipment and a path to AI upgrades for customers. For more information on GTI, visit https://www.gyrfalcontech.ai/.
Media Contact
Kristin Taylor
Gyrfalcon Technology, Inc. (GTI)
+1.415.310.3390
[email protected]
SOURCE Gyrfalcon Technology Inc.