[Paper] Google AI introduces a Pull-Push Denoising Algorithm and Polyblur, a Deblurring Method, to remove noise and blur in images

Despite advances in imaging technology, image noise and limited sharpness remain among the most critical factors constraining the visual quality of an image. Noise can be linked to the particle nature of light, or it may be introduced by electronic components during the read-out process. The captured noisy signal is then processed by the camera's image signal processor (ISP), where it can be enhanced, amplified, and distorted. Image blur may be caused by a wide range of phenomena, from inadvertent camera shake during capture and incorrect camera focusing to resolution limits imposed by the sensor and lens aperture.
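For intuition, here is a minimal sketch of the polynomial-reblurring idea behind Polyblur: if the blur operator B can be estimated (here assumed to be Gaussian with a known sigma, whereas Polyblur estimates the blur per image), its inverse can be approximated by a truncated Neumann series, I + (I - B) + (I - B)^2 = 3I - 3B + B^2, so "deblurring" only ever applies the blur itself:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def polynomial_deblur(image, sigma=1.5):
    """Mild deblurring via a degree-2 polynomial of the blur operator B.

    Truncated Neumann series: B^-1 ~= I + (I - B) + (I - B)^2 = 3I - 3B + B^2,
    so the approximate inverse only ever applies the blur itself (reblurring).
    `image` is assumed to be a float array scaled to [0, 1]; `sigma` is an
    illustrative stand-in for the blur estimate.
    """
    blur = lambda x: gaussian_filter(x, sigma)  # assumed Gaussian blur model
    bx = blur(image)
    bbx = blur(bx)
    return np.clip(3.0 * image - 3.0 * bx + bbx, 0.0, 1.0)
```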

Read More

VOCHI raises additional $2.4 million for its computer vision-powered video editing app


VOCHI, a Belarus-based startup behind a clever computer vision-based video editing app used by online creators, has raised an additional $2.4 million in a “late-seed” round that follows the company’s initial $1.5 million round led by Ukraine-based Genesis Investments last year. The new funds follow a period of significant growth for the mobile tool, which is now used by more than 500,000 people per month and has achieved a $4 million-plus annual run rate in a year’s time.

Read More

NVIDIA Launches TensorRT 8, which improves AI Inference Performance, making Conversational AI smarter and more interactive from Cloud to Edge


Today, NVIDIA released the eighth generation of the company’s AI software: TensorRT™ 8, which cuts inference time for language queries in half. This latest version of the software allows firms to deliver conversational AI applications with a quality and responsiveness that were never possible before.

Read More

Duke Energy used computer vision and robots to cut costs by $74M [with Video]


Duke Energy’s AI journey began because the utility company had a business problem to solve, Duke Energy chief information officer Bonnie Titone told VentureBeat’s head of AI content strategy Hari Sivaraman at the Transform 2021 virtual conference on Thursday.

Read More

[Paper Summary] DeepMind introduces its Supermodel AI ‘Perceiver’: a Neural Network Model that can process all types of input

DeepMind recently introduced a state-of-the-art deep learning model called Perceiver in a new paper. It adapts the Transformer so it can consume all types of input, from audio to images, and perform different tasks, such as image recognition, for which specialized neural networks are generally developed. It works much like the way the human brain perceives multi-modal input.
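The paper's central trick is a small, fixed-size latent array that cross-attends to the raw input array, so compute scales with the latent size rather than the (possibly huge) input length. Below is a minimal numpy sketch of one such cross-attention step; all sizes and the random projection weights are purely illustrative:

```python
import numpy as np

def cross_attend(latents, inputs, seed=0):
    """One Perceiver-style cross-attention step (single head; projection
    weights are random stand-ins). latents: (N, D) learned array; inputs:
    (M, D) flattened raw bytes, where M may be far larger than N."""
    N, D = latents.shape
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))
    Q, K, V = latents @ Wq, inputs @ Wk, inputs @ Wv
    scores = Q @ K.T / np.sqrt(D)                    # (N, M): latent x input
    scores -= scores.max(axis=1, keepdims=True)      # numerically stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ V                                  # (N, D) summary of the input

# A 64x64 image flattened to 4096 tokens of dim 32, compressed into 128 latents:
rng = np.random.default_rng(1)
out = cross_attend(rng.standard_normal((128, 32)), rng.standard_normal((4096, 32)))
```

Because attention is computed between 128 latents and 4096 inputs rather than among all 4096 inputs, the cost is O(N·M) instead of the O(M²) of standard self-attention, which is what lets the same architecture ingest audio, images, or point clouds.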

Read More

[Paper Summary] Researchers at Facebook AI, UC Berkeley, and Carnegie Mellon University Announced Rapid Motor Adaptation (RMA), An Artificial Intelligence (AI) Technique

To achieve success in the real world, walking robots must adapt to whatever surfaces they encounter, objects they carry, and conditions they are in, even if they’ve not been exposed to those conditions before. Moreover, to avoid falling and suffering damage, these adjustments must happen in fractions of a second.

Read More

Computer Vision software startup Algolux brings in $18.4M in Series B Funding


Algolux, a computer vision startup that builds software for advanced driver assistance systems (ADAS) and for autonomous vehicles, has secured $18.4 million in new Series B funding from a group of investors that includes General Motors’ investment division, GM Ventures.
The new funding, which brings the Montreal, Canada-based company’s total funding to $36.8 million, was co-led by Forte Ventures and Drive Capital. Other investors include Investissement Quebec, Castor Ventures, the Nikon-SBI Innovation Fund, Generation Ventures and Intact Ventures.
Algolux says it will use the fresh capital to promote its computer vision and image optimization technologies to vehicle makers for use in their future vehicles, to expand its engineering and marketing teams, and to explore additional vertical markets for its technologies. The company announced the round on Monday, July 12.
The company’s computer vision software is used with in-vehicle cameras in ADAS and autonomous vehicles, a market that continues to grow in use and popularity.
“Unfortunately, vision – the most widely deployed component of the overall perception stack – is still hampered by performance issues in low light and poor weather conditions making SAE Levels 2 and above more challenging to support,” the company said in its press release.
To address this problem, Algolux uses computational imaging to design algorithms that treat the camera as part of the overall perception stack, a departure from the traditional siloed approach, according to the company. This approach resolves problems such as low light, low contrast and obstructions for object detection, imaging and geometric estimation, yielding clearer images and better resolution. The use of physical camera models also reduces training-data needs by an order of magnitude, and the company says its technologies outperform commercial solutions by as much as 60 points in mean average precision (mAP).
“We are thrilled to be taking this next step in the company’s trajectory and to do so with the trust and support of outstanding investors,” Allan Benchetrit, the CEO of Algolux, said in a statement. “Algolux is actively engaged with leading OEMs, Tier 1s, and Tier 2s globally. The consistent theme is a desire from customers to significantly improve the performance of their driving and parking vision systems in even the most challenging real-world situations.”
Shelly Kramer, a founding partner and lead analyst with Futurum Research, told EnterpriseAI that Algolux’s latest funding news is an indicator of just how important computer vision is and how it will continue to move forward in the automotive sector.
“The fact that camera-based advanced driver assistance systems are table stakes when it comes to driving experiences today – both driver-led and autonomous – combined with the fact that camera tech still has a long way to go in terms of functionality and accuracy, means this is good news for the industry,” said Kramer. “Algolux’s computational imaging as part of the algorithm design process bodes well for all those days when my car’s camera tells me it can’t see because of weather conditions — and for the computer vision industry and the automotive industries. This is especially good news for the trucking industry and autonomous vehicles. This is an industry, and a company, to watch.”
James Kobielus, senior research director for data communications and management at TDWI, a data analytics consultancy, said the computer vision market today is “extraordinarily overcrowded” and that its use for automotive safety still has a long way to go before it is ready for primetime deployment.
“I am impressed with Algolux’s focus on AI-powered cameras for robust perception in all conditions,” said Kobielus. “It approaches visual imaging as an integral, but not self-sufficient, component of the automotive perception stack. Without supplementary sensing inputs (such as radar, LiDAR, infrared, and ultrasound) and the composite AI to tie it all together in real time, automotive computer vision systems are extremely prone to mistakes from ever-present visual phenomena, such as low lighting, low contrast, and obstructed sightlines.”
The larger trend in the marketplace is the deployment of AI-driven perception stacks in which computer vision is essentially the sum of all sensor inputs that can be rendered as visual patterns, said Kobielus.
“Through sophisticated AI, it is increasingly possible to infer a highly accurate visual portrait from the radio frequency signals that people and objects reflect, the pressure and vibrations they generate, and the heat patterns that they radiate,” he said. “Algolux will need the funding to invest in the R&D necessary to improve its composite AI and to work with industry partners to build it into the ASICs necessary for ADAS safety applications.”


Read More

[Paper Summary] Cornell and Harvard University Researchers develop Correlator Convolutional Neural Networks (CCNN) to determine which Correlations are most important

A team of researchers from Cornell and Harvard University has introduced a novel approach to parsing quantum matter and making crucial data distinctions. The proposed technique will enable researchers to decipher some of the most perplexing phenomena in the subatomic realm.

Read More

IBM Open Sources ‘CodeFlare’, a Machine Learning Framework that simplifies AI Workflows on the Hybrid Cloud


IBM has open-sourced CodeFlare, a machine learning framework that lets developers train their models more efficiently on the hybrid cloud. The framework is an exciting prospect for those looking to simplify their workflows and shorten the time they take: in IBM's example, a user running 10,000 pipelines who previously waited up to four hours for results can get them in about 15 minutes with CodeFlare.

Read More

[Paper Summary] A novel Caltech Algorithm allows Autonomous Systems to navigate by referencing the Surrounding Terrain, summer or winter


The algorithm employs a process called ‘visual terrain-relative navigation’ (VTRN), first developed in the 1960s, which lets autonomous devices compare the surrounding terrain to high-resolution satellite images in order to locate themselves.
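For intuition, here is a hedged sketch of the basic VTRN matching step, not Caltech's algorithm: slide the onboard view across a georeferenced satellite image and keep the best normalized cross-correlation match. The paper's contribution is making such matching robust to seasonal appearance changes (snow, foliage), which this naive version cannot handle:

```python
import cv2
import numpy as np

def locate(onboard_view: np.ndarray, satellite_map: np.ndarray):
    """Estimate position by sliding the onboard terrain view across a
    (much larger) georeferenced satellite image and keeping the best
    normalized cross-correlation peak."""
    scores = cv2.matchTemplate(satellite_map, onboard_view, cv2.TM_CCOEFF_NORMED)
    _, best, _, top_left = cv2.minMaxLoc(scores)      # peak match and its location
    h, w = onboard_view.shape[:2]
    center = (top_left[0] + w // 2, top_left[1] + h // 2)
    return center, best  # map pixel coordinates and match confidence in [-1, 1]
```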

Read More

Tesla AI chief explains why self-driving cars don’t need lidar


What is the technology stack needed to create fully autonomous vehicles? Companies and researchers are divided on the answer. Approaches to autonomous driving range from cameras and computer vision alone to a combination of computer vision and advanced sensors. Tesla has been a vocal champion of the pure vision-based approach to autonomous driving, and at this year’s Conference on Computer Vision and Pattern Recognition (CVPR), its chief AI scientist Andrej Karpathy explained why.

Read More

Importance of Data Annotation for Machine Learning

The terms data annotation and data labeling come up whenever someone talks about implementing an AI or ML project. So what are machine learning and artificial intelligence? The basic premise of machine learning is that computer systems and programs can improve their outputs in ways that resemble human cognitive processes, without direct human help or intervention, to give us insights. In other words, they become self-learning machines that, much like a human, get better at their job with more practice.
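For concreteness, here is a hypothetical example of what a single annotation looks like in a common object-detection labeling style (COCO-like field names; the values are invented for illustration):

```python
# One hypothetical COCO-style annotation: a human-drawn box telling the
# model what a "dog" looks like in image 42. The raw pixels mean nothing
# to a model until labels like these are attached.
annotation = {
    "image_id": 42,
    "category": "dog",
    "bbox": [120, 55, 200, 180],  # [x, y, width, height] in pixels
}
```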

Read More

EBRAINS Researchers introduce a Robot whose internal workings Mimic a Human Brain (with Video)


The human brain contains roughly 86 billion neurons that process information from the senses and body and send messages back to the body. Human intelligence is thus one of the most intriguing capabilities AI scientists are looking to replicate. A team of researchers at the new EBRAINS research infrastructure is building robots whose internal workings mimic the brain, an approach that could yield new insights into neural mechanisms.

Read More

A Single-Number Metric for evaluating Object Detection Models

Evaluating an object detection model using precision and recall can provide valuable insight into how the model performs at various confidence values. Similarly, the F1 score is especially helpful in finding the confidence threshold that best balances precision and recall for a given model; however, F1 still varies over the whole domain of confidence values from 0 to 1. A single-value evaluation metric, derived from the set of F1 scores for a given model, can be a good indicator of overall model performance.
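One simple way to derive such a metric (an assumed choice for illustration, not necessarily the article's) is to sweep the confidence threshold and report the peak F1 together with the threshold that achieves it:

```python
import numpy as np

def best_f1(precisions, recalls, thresholds):
    """Collapse per-threshold precision/recall into one number: the peak F1.

    precisions, recalls, thresholds: equal-length sequences produced by
    sweeping the confidence threshold over [0, 1].
    """
    p, r = np.asarray(precisions), np.asarray(recalls)
    f1 = 2 * p * r / np.clip(p + r, 1e-12, None)  # guard against 0/0
    i = int(np.argmax(f1))
    return f1[i], thresholds[i]  # peak F1 and the confidence achieving it

# e.g. f1, conf = best_f1([0.9, 0.8, 0.6], [0.4, 0.7, 0.9], [0.75, 0.5, 0.25])
```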

Read More

Google AI introduces MIAP (More Inclusive Annotations for People) Dataset in the Open Images Extended Collection for Computer Vision Research

Obtaining datasets that include thorough labeling of sensitive attributes is difficult, especially in the domain of computer vision. Recently, Google has introduced the More Inclusive Annotations for People (MIAP) dataset in their Open Images Extended collection.

Read More

Postal Service turns to computer vision AI, edge computing to improve delivery


USPS announced this month that it has deployed advanced computer systems at 195 mail processing centers nationwide to run seven computer vision models, cutting the time it takes to track a missing package from several days to less than two hours. Additionally, a computer vision task that would have taken two weeks on servers with 800 CPUs can be done in 20 minutes on four NVIDIA V100 Tensor Core GPUs in a single Hewlett Packard Enterprise Apollo 6500 server.

Read More