[Paper Summary] DeepMind Researchers introduce Epistemic Neural Networks (ENNs) for Uncertainty Modeling in Deep Learning

Deep learning algorithms are widely used in AI applications because their flexibility and computational scalability make them suitable for complex problems. However, most deep learning methods today neglect epistemic uncertainty, the uncertainty that stems from limited knowledge, which is crucial for safe and fair AI. A new DeepMind study provides a way of quantifying epistemic uncertainty, along with new perspectives on existing methods, to improve our statistical understanding of deep learning.

Read More

[Paper] MIT & Google Quantum Algorithm trains wide and deep Neural Networks

Quantum algorithms for training wide, classical neural networks have become one of the most promising research areas for quantum computing applications. While neural networks have achieved state-of-the-art results across many benchmark tasks, existing quantum neural networks have yet to clearly demonstrate quantum speedups on tasks involving classical datasets. Given deep learning’s ever-rising computational requirements, using quantum computers to efficiently train deep neural networks is a research field that could greatly benefit from further exploration.

Read More

Qualcomm stakes beachhead in Artificial Intelligence with Foxconn Gloria AI Edge Box

Qualcomm Cloud AI 100

When most folks think of Qualcomm, the first technologies that likely come to mind are the company’s industry-leading mobile platform system-on-chips for smartphones, as well as the company’s end-to-end 5G connectivity solutions. However, whether you consider applications like image recognition, speech input, natural language translation or recommendation engines, modern smartphone platforms typically require a lot of artificial intelligence (AI) processing horsepower as well.

Read More

Intel aims to regain chip manufacturing leadership by 2025

Intel Innovations

Under new management, Intel aims to recapture a crown that it owned for decades and regain technology leadership in manufacturing chips by 2025. This will be challenging, as the company has to invest tens of billions of dollars and get its technology right in the wake of numerous missteps, but new CEO Pat Gelsinger said at an event that the big chipmaker is accelerating its investments in manufacturing processes and packaging innovations.

Read More

[Paper Summary] Scientists have created a new Tool ‘Storywrangler’ that can explore billions of Social Media messages in order to Predict Future Conflicts and Turmoil

Scientists have recently invented an instrument to delve deeper into the billions of posts made on Twitter since 2008. The new tool provides an unprecedented, minute-by-minute view of popularity. The research was carried out by a team at the University of Vermont, which calls the instrument Storywrangler.

Read More

My two EdTech adventures

Child at a computer

I have been thinking a little about the impact of digital technologies on education; it has been significant and, with the advent of the pandemic, ubiquitous. I am interested in NLProc (Natural Language Processing) and have been pondering its applications in pedagogy and education. These thoughts brought back some memories of what can loosely be considered my EdTech adventures.

Read More

[Paper] Google AI introduces a pull-push Denoising Algorithm and Polyblur: A Deblurring Method that eliminates noise and blur in images

Despite advances in imaging technology, image noise and limited sharpness remain among the most critical factors restricting the visual quality of an image. Noise can be linked to the particle nature of light or introduced by electronic components during the read-out process. The captured noisy signal then passes through the camera’s image signal processor (ISP), where it is enhanced, amplified, and distorted. Image blur may be caused by a wide range of phenomena, from inadvertent camera shake during capture and incorrect camera focusing to limits imposed by the lens aperture and sensor resolution.

Read More

[Paper] OpenAI reveals details about its Deep Learning Model ‘Codex’: The backbone of Github’s ‘Copilot’ Tool

In their latest paper, researchers at OpenAI reveal details about a deep learning model called Codex. This technology is the backbone of Copilot, an AI pair programmer tool jointly developed by GitHub and OpenAI that’s currently available in beta to select users. The paper explores the process of repurposing their flagship language model GPT-3 to create Codex, as well as how far you can trust deep learning in programming. The OpenAI scientists managed this feat by using a series of new approaches that proved successful with previous models.

Read More

Bias in AI isn’t an enterprise priority, but it should be, survey warns

AI bias survey

A global survey published today finds nearly a third (31%) of respondents consider the social impact of bias in models to be AI’s biggest challenge. This is followed by concerns about the impact AI is likely to have on data privacy (21%). More troubling, only 10% of respondents said their organization has addressed bias in AI, with another 30% planning to do so sometime in the next 12 months.

Read More

V is for Data

V is for data

Data science, machine learning, and all forms of artificial intelligence have data at their core. We often focus the bulk of our attention on formulas or code when it comes to these disciplines, and that makes sense for researchers in those areas of knowledge. But most professionals and hobbyists alike are practitioners of data science and machine learning rather than researchers.

Read More

[Paper Summary] Facebook AI Introduces few-shot NAS (Neural Architecture Search)

Neural Architecture Search (NAS) has recently become an interesting area of deep learning research, offering promising results. One such approach, Vanilla NAS, uses search techniques to explore the search space and evaluate new architectures by training them from scratch. However, this may require thousands of GPU hours, leading to a very high computing cost for many research applications.

Read More

What is so impressive about Google and DeepMind’s AlphaFold Technology

Cell connectors

Google-owned DeepMind has pushed the limits of Artificial intelligence. One of the first introductions most people had to DeepMind was through AlphaZero [1]. “AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.” AlphaZero achieved all of this through a process called reinforcement learning, basically playing repeated games against itself until it identified winning strategies.

Read More

The extreme costs of Faulty AI and the vital role Humans play

Facial recognition

Nowadays, more and more companies routinely tap into AI as a solution or catalyst for next-generation offerings with huge real-world impacts. With such ambitious undertakings, mistakes go beyond mere inconvenience. AI failures can lead to real-life harm: physical, psychological, and emotional. The potential financial risk to a company’s bottom line could put a significant dent in the reward.

Read More

The user-experience crisis in bioinformatics & artificial intelligence software

Cell structure

Biology was once a domain where sight alone could distinguish plant, animal, and bacterial cells. In those times, it was possible for a single person to study a wide variety of organisms and still discover new mechanisms. Nowadays, however, you cannot simply be a biologist anymore.

Read More

NVIDIA Launches TensorRT 8 that improves AI Inference Performance making Conversational AI smarter and more interactive from Cloud to Edge

TensorRT 8

Today, NVIDIA released the eighth generation of the company’s AI software, TensorRT™ 8, which cuts inference time for language queries in half. This latest version allows firms to deliver conversational AI applications with quality and responsiveness that were never possible before.

Read More

How to use image preprocessing to improve the accuracy of Tesseract

Previously, in How to get started with Tesseract, I gave you a practical quick-start tutorial on Tesseract using Python. It was a pretty simple overview, but it should help you get started with Tesseract and clear some hurdles I faced when I was in your shoes. Now, I’m keen on showing you a few more tricks you can use with Tesseract and OpenCV to improve your overall accuracy.

Read More

[Paper Summary] ‘QuanTaichi’ Quantized Simulation: High Visual Quality With Reduced Memory Cost

High-resolution simulations can provide the great visual quality demanded by today’s advanced computer graphics applications. However, as simulations scale up, they require increasingly costly memory to store physical states, which can be problematic, especially when running on GPUs with hard memory space limits.

Read More

Edge Impulse Combines AutoML and TinyML to make AI ubiquitous

Edge Impulse

Edge Impulse combines two popular techniques to bring AI to microcontrollers: AutoML and TinyML. AutoML focuses on two critical aspects of machine learning, data acquisition and prediction, and the platform abstracts away all the steps that take place between these two phases. Essentially, developers bring their own dataset, identify the labels, and push a button to generate a thoroughly trained model that’s ready to predict.

Read More
1 2 3 18