An explanation of what dropout layers are and how they improve the generalization of a neural network. Through this, we see that dropout improves the performance of neural networks on supervised learning tasks in speech recognition, document classification, and vision.
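For a concrete sense of how this looks in practice, here is a minimal Keras sketch; the layer sizes and the 0.5 rate are illustrative assumptions, not the post's exact model:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Illustrative classifier with a dropout layer; sizes and rate are made up.
model = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(784,)),
    layers.Dropout(0.5),  # randomly zeroes 50% of activations during training
    layers.Dense(10, activation="softmax"),
])
# Keras applies dropout only during training; at inference it is disabled automatically.
```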
Tag: Neural Networks
Using Reinforcement Learning to build a Self-Learning Grasping Robot
In this post, I will share my experience from a year of working with Reinforcement Learning (RL) on autonomous robotic manipulation. It is always hard to start a big project with many moving parts, and this one was no exception. I want to pass on the knowledge I gathered along the way to help others overcome the initial inertia.
DeepMind’s big losses, and the questions around running an AI lab
Last week, on the heels of DeepMind’s breakthrough in using AI to predict protein folding, came the news that the UK-based AI company is still costing its parent company Alphabet Inc hundreds of millions of dollars in losses each year. A tech company losing money is nothing new. The tech industry is replete with examples of companies that burned investor money long before becoming profitable. But DeepMind is not a normal company seeking to grab a share of a specific market. It is an AI research lab that has had to repurpose itself into a semi-commercial outfit to ensure its survival.
The inevitable symbiosis of Cybersecurity and AI
As AI and Deep Learning advance at an ever-increasing rate, people have started to ask questions. Questions about jobs being made obsolete, questions about the inherent biases programmed into neural networks, questions about whether AI will eventually consider humans dead weight, unnecessary to achieving the goals it has been programmed with.
Honey I shrunk the Model: Why big Machine Learning models must go small
Bigger is not always better for machine learning. Yet, deep learning models and the datasets on which they’re trained keep expanding, as researchers race to outdo one another while chasing state-of-the-art benchmarks. However groundbreaking these models are, the consequences are severe for budgets and the environment alike.
No, but actually, how does a Neural NetWORK? - Part 2
The second part of the quick and fluff-free introduction to neural networks in machine learning, and how to build your first neural network model! 😄
No, but actually, how does a Neural NetWORK? - Part 1
A quick and fluff-free introduction to neural networks in machine learning, and how to build your first neural network model! 😄
Essential Guide to Transformer Models in Machine Learning
Transformer models have become the de facto standard for NLP tasks. As an example, I’m sure you’ve already seen the awesome GPT-3 Transformer demos and the articles detailing how much time and money it took to train.
Bigger is not always better: exploiting parallelism in NN hardware
When considering hardware platforms for executing high-performance neural networks (NNs), automotive system designers frequently determine the total compute power by simply adding up each NN’s requirements. However, this approach usually leads to demands for a single large NN accelerator.
Are Neural Networks making us lazier?
Sometimes I feel like human progress is measured by how much lazier we can get.
Speeding up Deep Learning inference via unstructured sparsity
Serving large neural networks can be expensive. It certainly doesn’t help that a neural network’s size appears to correlate with how useful it is.
From von Neumann to Memory-Augmented Neural Networks
The traditional von Neumann architecture differentiates between a CPU (Central Processing Unit) and three levels of memory: registers, which are very fast but can store only a few values; main memory (e.g. RAM), which is slower but has enough storage to hold the instructions and data of a running program; and external memory (e.g. a hard drive), which is slow but has room for virtually all data used by a computer.
How Artificial Neural Networks work in Deep Learning
Hey everyone, back with another blog related to deep learning. In this blog, we are going to learn the core concept behind Deep Learning, i.e. Neural Networks, and cover some of the basic concepts underlying them.
Multiclass Classification and Information Bottleneck — An example using Keras
Initially, we will train our models for 20 epochs in mini-batches of 512 samples. We will also pass our validation set to the fit method.
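As a sketch, that training call might look like the following; the data here is a random stand-in and the class count and model shape are hypothetical, since the post presumably uses a real dataset and its own architecture:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Random stand-in data; shapes and the class count are hypothetical.
num_classes = 46
x_train = np.random.rand(2000, 100).astype("float32")
y_train = keras.utils.to_categorical(np.random.randint(num_classes, size=2000), num_classes)
x_val = np.random.rand(500, 100).astype("float32")
y_val = keras.utils.to_categorical(np.random.randint(num_classes, size=500), num_classes)

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(100,)),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])

# As described: 20 epochs, mini-batches of 512 samples, validation set passed to fit
history = model.fit(x_train, y_train,
                    epochs=20,
                    batch_size=512,
                    validation_data=(x_val, y_val))
```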
Resiliency of power grid being Boosted with Help of AI
Long-running US government investments in building a “smart grid” for delivering electric power are now employing AI techniques to improve resiliency.
Why we need bias in Neural Networks
The term “bias” has a lot of pejorative connotations. However, words have many meanings depending on the context, and surprisingly, even bias can be something helpful. Machine Learning is a domain where we encounter bias in a couple of contexts.
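To make the neural-network sense of the word concrete: the bias is the learnable offset b in a neuron’s output w·x + b, which lets the neuron fit functions that do not pass through the origin. A tiny sketch with made-up numbers:

```python
import numpy as np

def neuron(x, w, b):
    # A single linear neuron: output = w·x + b
    return np.dot(w, x) + b

x = np.array([1.0, 2.0])
w = np.array([0.5, -0.3])
print(neuron(x, w, b=0.0))  # -0.1: without bias, the output is fully determined by w·x
print(neuron(x, w, b=1.0))  #  0.9: the bias shifts the output independently of x
```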
Training Neural Networks for price prediction with TensorFlow
Using Deep Neural Networks for regression problems might seem like overkill, but in some cases of high-dimensional data, they can outperform other ML models. In this guide, I listed some key tips and tricks learned while using DNNs for regression problems.
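As a minimal sketch of the setup (the architecture, sizes, and data here are illustrative assumptions, not the guide’s actual configuration):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical stand-in data: 200 samples, 30 features, a scalar price target.
x = np.random.rand(200, 30).astype("float32")
y = np.random.rand(200, 1).astype("float32")

# A DNN regressor: linear output neuron, mean-squared-error loss.
model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(30,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),  # no activation: unbounded real-valued output for regression
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
```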
Simultaneous Continuous/Discrete Hyperparameter tuning with Policy Gradients
We demonstrate an efficient method for simultaneously tuning discrete and continuous hyperparameters for machine learning models using policy gradients.
How do Deep Neural Networks work?
Every day we encounter AI and neural networks in some way: from everyday phone features such as face detection and speech or image recognition, to more sophisticated applications such as self-driving cars and gene-disease prediction. We think it is time to finally sort out what AI consists of, what a neural network is, and how it works.
So you want to study Machine Learning and Civil Engineering?
The application of ML to Civil Engineering began in the 1980s, when ML techniques were applied to knowledge extraction from Civil Engineering (CIE) data. The field is rife with uncertainty in areas including construction management, safety, design, and decision making; solving these problems depends on both calculation and the experience of practitioners.