[Paper Summary] DeepMind Researchers introduce Epistemic Neural Networks (ENNs) for Uncertainty Modeling in Deep Learning

Deep learning algorithms are widely used in numerous AI applications because their flexibility and computational scalability make them suitable for complex problems. However, most deep learning methods today neglect epistemic uncertainty (uncertainty arising from limited knowledge), which is crucial for safe and fair AI. A new DeepMind study provides a way to quantify epistemic uncertainty, along with new perspectives on existing methods, to improve our statistical understanding of deep learning.

Read More

Stanford Researchers put Deep Learning on a Data Diet

With the cost of training deep learning models on the rise, individual researchers and small organisations are settling for pre-trained models. Today, only the likes of Google or Microsoft have the budgets (read: millions of dollars) to train state-of-the-art language models. Meanwhile, efforts are underway to make the whole paradigm of training less daunting for everyone. Researchers are actively exploring ways to maximise training efficiency so that models run faster and use less memory.

Read More

[Paper] MIT & Google Quantum Algorithm trains wide and deep Neural Networks

Quantum algorithms for training wide, classical neural networks have become one of the most promising research areas for quantum computer applications. While neural networks have achieved state-of-the-art results across many benchmark tasks, existing quantum neural networks have yet to clearly demonstrate quantum speedups for tasks involving classical datasets. Given deep learning’s ever-rising computational requirements, the use of quantum computers to efficiently train deep neural networks is a research field that could greatly benefit from further exploration.

Read More

Aspect based sentiment analysis on financial news data using classical machine learning algorithms

Sentiment analysis is a very popular technique in Natural Language Processing. We see it applied to gauge the polarity of social network posts, movie reviews, or even books. However, basic sentiment analysis can be limited, as it lacks precision about the subject being evoked. Take the example of reviews for a computer: how do we know what is good or bad? Is it the keyboard, the screen, the processor?
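The idea can be sketched in a few lines of plain Python: instead of one polarity per review, assign a polarity to each aspect found near an opinion word. The aspect list, opinion lexicon, and window size below are illustrative assumptions, not the article's actual method.

```python
# Minimal aspect-based sentiment sketch: score each aspect by the opinion
# words that appear within a small window around it (lexicon is illustrative).
ASPECTS = {"keyboard", "screen", "processor", "battery"}
POSITIVE = {"great", "good", "fast", "crisp"}
NEGATIVE = {"bad", "slow", "mushy", "dim"}

def aspect_sentiment(review: str) -> dict:
    """Return {aspect: polarity} for every aspect mentioned in the review."""
    tokens = review.lower().replace(",", " ").replace(".", " ").split()
    results = {}
    for i, tok in enumerate(tokens):
        if tok in ASPECTS:
            # Look at a +/- 2 token window around the aspect for opinion words.
            window = tokens[max(0, i - 2): i + 3]
            score = sum(w in POSITIVE for w in window) - sum(w in NEGATIVE for w in window)
            results[tok] = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return results

print(aspect_sentiment("The screen is crisp but the keyboard feels mushy."))
```

A real system would replace the window heuristic with dependency parsing or a trained classifier, but the output shape is the same: per-aspect polarities rather than a single review-level score.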

Read More

Introduction to Time Series Forecasting — Part 1 (Average and Smoothing Models)

Time series is a unique field, a science in itself. As experts put it, 'a good forecast is a blessing, while a wrong forecast can prove to be dangerous'. This article introduces the basic concepts of time series and briefly discusses the popular methods used to forecast time series data.
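The two families in the article's subtitle, average and smoothing models, can be written in a few lines of dependency-free Python. The toy series and parameter values are illustrative.

```python
# Two simple forecasters: a moving average and simple exponential smoothing.

def moving_average_forecast(series, window):
    """Forecast the next value as the mean of the last `window` observations."""
    return sum(series[-window:]) / window

def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: level = alpha*obs + (1-alpha)*level."""
    level = series[0]
    for obs in series[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level  # the final level is the forecast for the next period

sales = [10, 12, 13, 12, 15, 16]
print(moving_average_forecast(sales, window=3))   # mean of [12, 15, 16]
print(exponential_smoothing(sales, alpha=0.5))
```

The smoothing parameter `alpha` trades responsiveness for stability: values near 1 track the latest observation, values near 0 average over long history.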

Read More

Reinforcement Learning vs Genetic Algorithm — AI for Simulations

I had to decide on an optimization approach that would better suit the use case. I had Reinforcement Learning and Genetic Algorithms (two roads among many) in mind, but then an epiphany: “Both are nature-inspired AI approaches, so how do the two differ? And more importantly, in which scenarios is one favoured over the other?” And so today we will be dissecting parts of the thought process behind coming to a decision.

Read More

Oilfield lithology prediction from Drilling Data with Machine Learning

Oil drilling

The oil and gas industry has recently seen quite a few applications of machine learning. I wrote an article last year about sonic log prediction, which surprisingly received many responses and spread very rapidly. Another ML application is lithology prediction. Lithology refers to the type of rock; it is classified into, for instance, sandstone, claystone, marl, limestone, and dolomite.

Read More

[Paper Summary] Scientists have created a new Tool ‘Storywrangler’ that can explore billions of Social Media messages in order to Predict Future Conflicts and Turmoil

Scientists have recently invented an instrument to delve deeper into the billions of posts made on Twitter since 2008. The new tool provides an unprecedented, minute-by-minute view of popularity. The research was carried out by a team at the University of Vermont, who call the instrument Storywrangler.

Read More

[Paper] Google AI introduces a pull-push Denoising Algorithm and Polyblur: A Deblurring Method that eliminates noise and blur in images

Despite the advances in imaging technology, image noise and limited sharpness remain among the most critical factors affecting the visual quality of an image. Noise can be linked to the particle nature of light, or introduced by electronic components during the read-out process. The captured noisy signal is then processed by the camera image signal processor (ISP), where it is enhanced, amplified, and distorted. Image blur may be caused by a wide range of phenomena, from inadvertent camera shake during capture and incorrect camera focusing to limited sensor resolution and the finite lens aperture.

Read More

Decision Trees vs. Random Forests in Machine Learning

As we all know, machine learning is the subset of Artificial Intelligence that lets us train a model to make predictions for business consumption. We train the machine with a training dataset, but the data does not always have to be labeled, i.e. annotated with specified target features. This is where unsupervised ML models come into the picture: supervised models work on labeled data, while unsupervised models work on unlabeled data.
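The comparison in the article's title can be sketched with scikit-learn (assumed available); the synthetic dataset and hyperparameters below are illustrative, not from the article.

```python
# A single decision tree vs. a random forest (an ensemble of decorrelated
# trees) on a toy classification dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("single tree accuracy:", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
```

A lone tree tends to overfit its training set; the forest averages many trees trained on bootstrap samples with random feature subsets, which usually lowers variance and improves test accuracy.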

Read More

Human Brain neuron and Artificial neuron

As you might already be aware, the human brain is made up of billions of neurons and an incredible number of connections between them. Each neuron is connected to multiple other neurons, and they repeatedly exchange information. Whatever activity we do, physical or mental, fires up a certain set of neurons in our brains.
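The artificial analogue of the biological neuron described above can be written in a few lines: inputs are weighted, summed with a bias, and passed through an activation function. The weights and inputs below are arbitrary illustrative values.

```python
# A single artificial neuron: weighted sum of inputs plus bias, squashed
# through a sigmoid activation into the range (0, 1).
import math

def neuron(inputs, weights, bias):
    """Output of one artificial neuron with a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # "fires" strongly when z is large

print(neuron([1.0, 0.5], weights=[0.8, -0.4], bias=0.1))
```

Stacking many such units in layers, and learning the weights from data, is all a basic feed-forward neural network amounts to.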

Read More

The abolition of the data science cave

We’ve heard about explainability when it comes to understanding the decisions made by our data science models. But what about explainability during the development process? As data scientists we often work in interdisciplinary teams, where not everyone is familiar with our specialty, just as we are not familiar with theirs. These environments allow us to build the best possible…

Read More

[Paper] OpenAI reveals details about its Deep Learning Model ‘Codex’: The backbone of Github’s ‘Copilot’ Tool

In their latest paper, researchers at OpenAI reveal details about a deep learning model called Codex. This technology is the backbone of Copilot, an AI pair programmer tool jointly developed by GitHub and OpenAI that’s currently available in beta to select users. The paper explores the process of repurposing their flagship language model GPT-3 to create Codex, as well as how far you can trust deep learning in programming. The OpenAI scientists managed this feat by using a series of new approaches that proved successful with previous models.

Read More

8 Dimensionality Reduction Techniques every Data Scientist should know

Dimensionality Reduction is the process of transforming a higher-dimensional dataset to a comparable lower-dimensional space. A real-world dataset often has a lot of redundant features. Dimensionality reduction techniques can be used to get rid of such redundant features or convert the n-dimensional datasets to 2 or 3 dimensions for visualization.
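The best-known of these techniques, PCA, can be sketched directly with NumPy: center the data, then project it onto the directions of highest variance. The random 5-dimensional dataset below is illustrative.

```python
# Principal Component Analysis via the eigendecomposition of the
# covariance matrix: reduce an n x d dataset to n x k.
import numpy as np

def pca(X, k):
    """Project the n x d data matrix X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                 # center each feature
    cov = Xc.T @ Xc / (len(X) - 1)          # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :k]           # top-k eigenvectors
    return Xc @ top                         # n x k reduced dataset

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X_2d = pca(X, k=2)
print(X_2d.shape)  # (100, 2)
```

Reducing to 2 or 3 components like this is exactly the visualization use case the article mentions; in practice `sklearn.decomposition.PCA` does the same job with extras such as explained-variance ratios.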

Read More

Deep reinforcement learning helps us master complexity

Deep reinforcement learning—where machines learn by testing the consequences of their actions—is one of the most promising and impactful areas of artificial intelligence. It combines deep neural networks with reinforcement learning, which together can be trained to achieve goals over many steps. It’s a crucial part of self-driving vehicles and industrial robots, which have to navigate complex environments safely and on time.

Read More

Sentiment & Engagement Analysis from your Slack data

A sneak peek into your Slack space’s emotions. Ever wondered how engaging the content you delivered was? Was it clear or confusing? Did people misunderstand your message at that company-wide meeting? Remote environments give teachers and leaders very little chance to gain feedback and optimize their content for better performance.

Read More