Google AI, DeepMind and the University of Toronto introduce DreamerV2


DreamerV2 is the first world-model-based Reinforcement Learning (RL) agent to attain human-level performance on the Atari benchmark. It is the second generation of the Dreamer agent, which learns behaviors entirely within a world model’s latent space […]

Read More

IBM Develops AI Chip with remarkable energy efficiency


As the demand for energy-efficient, sustainable and smart technology rapidly increases, IBM has developed a new technology considered to be the world’s first energy-efficient chip for AI inference and training. The chip is built with 7-nanometer technology. A team of researchers proposed a hardware accelerator that supports a range of model types while achieving leading power efficiency on […]

Read More

Georgia Tech and Facebook AI researchers devise a new Tensor…

A recent study conducted jointly by researchers at the Georgia Institute of Technology and Facebook AI has opened the door to a new method called TT-Rec (Tensor-Train for DLRM). If employed successfully, this method would be a leap forward for deep learning, as it would significantly reduce the size of Deep Learning Recommendation Models […]

Read More

Biotech company combines Single Cell Genomics with Machine Learning


Immunai is a biotech company that combines machine learning algorithms with single-cell genomics to enable high-resolution profiling of the human immune system. Based in New York, the company was established merely three years ago, but it is growing at a breakneck pace and has built the world’s largest dataset of single-cell immunity characteristics.

Read More

Allen Institute For AI (AI2) Launches The 2.7.0 Version of AI2-THOR That Enables Users To Reduce Their Training Time Dramatically


Allen Institute for AI (AI2) has recently announced the 2.7.0 release of AI2-THOR. AI2-THOR is an open-source interactive environment for training and testing embodied AI. The 2.7.0 version of AI2-THOR contains several performance enhancements that can provide dramatic training time reductions. The new version introduces improvements to the inter-process communication (IPC) between Unity and Python and to the serialization/deserialization format. It also includes new actions that give finer control over the metadata.

Read More

Introduction to Sentiment Analysis

In this article, we’ll go through the process of building a sentiment analysis model using Python. Specifically, we’ll create a bag-of-words model using an SVM. By interpreting this model, we’ll also come to understand how it works. Along the way, you will learn the basics of text processing. We’ll go over key pieces of code, and you can find the full project on GitHub. Before we dive into all of that, let’s start by explaining what sentiment analysis is.
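
To give a taste of what that looks like, here is a minimal sketch of a bag-of-words plus SVM pipeline using scikit-learn, with toy data and parameter choices of my own; the article builds its own, fuller pipeline.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Toy labelled reviews (1 = positive, 0 = negative)
texts = ["great movie, loved it", "terrible plot and bad acting",
         "what a wonderful film", "boring and far too long"]
labels = [1, 0, 1, 0]

# Bag-of-words features fed into a linear SVM
model = make_pipeline(CountVectorizer(), LinearSVC())
model.fit(texts, labels)
print(model.predict(["loved the wonderful acting"]))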

Read More

3 ways Leaders fail their AI Projects


Why do so many AI Projects fail, and how can Leaders avoid this? How do most organizations begin their Artificial Intelligence (AI) journey? Let’s look at how leaders of some large enterprises planned their foray into AI. Here are a couple of recent examples from McKinsey:

The leader of a large organization spent two years and hundreds of millions of dollars on a company-wide data-cleansing initiative. The intent was to have one data meta-model before starting any AI initiative.
The CEO of a large financial services firm hired 1,000 data scientists, each at an average cost of $250K, to unleash AI’s power.
And here’s an example that I witnessed first-hand.
The CEO of a large manufacturer lined up a series of ambitious projects that used unstructured data, since AI techniques are very effective with text, image, and video data.
What do all of these initiatives have in common? They all failed.


In addition to the massive sunk costs, these projects left the organizations disillusioned with advanced analytics.
This is not uncommon. McKinsey’s State of AI survey found that only 22 percent of companies using AI reported a sizable bottom-line impact. Why do so many projects fail, and how can leaders avoid this?
Most leaders pursuing AI miss out on three areas of ownership. These responsibilities start well before you plan your AI projects, and they extend long after your projects go live.
Here are the three ways to fail your AI initiative:

Photo by 青 晨 on Unsplash
McKinsey found that only 30 percent of organizations aligned their AI strategy with the corporate strategy. Isn’t it shocking that a majority of leaders are burning their cash in the name of AI? Organizations often pursue AI initiatives that appear interesting or those that are just urgent.
True, your projects must address a business pain point. But, what’s more important is that these outcomes must align with your corporate strategy. Start with your business vision and identify how data will enable it. Clarify who your target stakeholders are and define what success will look like for them.


Then, identify the strategic initiatives that will empower the stakeholders and get them closer to their business goals. Now, you’re ready to brainstorm to come up with the long list of AI projects that are worth evaluating.
In a report by MIT Sloan Management Review, Steve Guise, the CIO of Roche Pharmaceuticals, explains how AI helps transform the company’s business model. Roche is working toward making personalized health care a reality. Guise points out that the current model of drug delivery will not help them achieve this vision. They see a need to accelerate the pace of drug discovery from three drugs per year to 30. Guise says that AI can help them get this exponential improvement.
Roche is making AI mainstream within the organization by building capabilities across screening, diagnosis, and treatment. It augments this by partnering with startups pursuing AI-driven drug discovery. Thanks to these efforts, Roche has made significant breakthroughs in the treatment of diseases such as hepatitis B and Parkinson’s. By starting with their corporate vision and aligning all their AI initiatives with this overarching objective, Roche’s efforts are bearing fruit.

Photo by KS KYUNG on Unsplash
When should you think about Return on Investment (ROI) from your AI project? Most organizations make the mistake of tracking ROI only when the project goes live. To make matters worse, leaders settle for fuzzy outcomes such as “efficiency improvement,” “brand value,” or “happier customers.”
True, it’s not easy to quantify the dollar value of outcomes. But it’s not impossible. You must demand quantification of business benefits even before greenlighting a project. AI can deliver value by either growing revenue or lowering expenses. Both are valuable. Define which of these outcomes your project will enable.

Leaders make the mistake of settling for fuzzy outcomes.

Identify a mix of leading and lagging metrics that will help measure these outcomes. Collect the data needed to compute the metrics by updating your processes or creating new ones. Finally, track your investments by going beyond the hardware, software, and technical team costs. Include your spending on adoption and change management programs. This ROI metric should be a critical factor in your project approval decision.
Deutsche Bank rolled out its AI-driven consumer credit product in Germany. The solution made a real-time decision on the loan even as the customer filled out the loan application. Consumers were worried about loan denials impacting their credit ratings. This product removed that risk by telling them whether their loan would be approved, even before they hit “apply.”
Deutsche Bank found that loan issuance shot up 10- to 15-fold in the eight months after the AI-powered service was launched. The gains came from bringing in customers who otherwise wouldn’t have applied in the first place. This was a clear case of AI helping grow revenue.

Photo by Tengyart on Unsplash
In its 2019 annual survey, Gartner asked Chief Data Officers about their biggest inhibitor to gaining value from analytics. The topmost challenge had nothing to do with data or technology. It was culture.
As Peter Drucker famously said, “Organizational culture eats strategy for breakfast.” Even the best-laid AI strategy will amount to nothing if you don’t carefully shape the organizational culture. A culture change must start at the top. Leaders must use storytelling to inspire and demonstrate how AI can help the organization achieve its vision.


Leaders must address the fear around AI and improve the data literacy of all employees. They must lead by example and sustain change by onboarding data champions across all levels. The cultural shift takes years, and leaders must influence it long after the projects have gone live.
Wonder what the main ingredient in a Domino’s pizza is? It’s data! Domino’s Pizza is the poster child of technology transformation. The organization lives a data-driven decision-making culture and uses AI across sales, customer experience, and delivery. This wasn’t the case 10 years ago.
Patrick Doyle took over as CEO of the 50-year-old pizza maker in 2010, when it was panned by customers and investors alike. Doyle took the bold step of going public with the harshest reviews. He then rebooted the company from the inside out and set it on the path of digital transformation. He placed some bold bets on technology by taking on risky projects, empowering people, and building several AI innovations in-house.
When Doyle retired in 2018, Domino’s sales had increased for 28 straight quarters, and the company delivered stock returns that outpaced Google’s. The outgoing CEO summed it up best: “We are a technology company that happens to sell pizza.” By leading a cultural transformation within Domino’s, Doyle ensured a shift to data-driven decisions that has been sustained even after he handed over to a new CEO.

Photo by Craig Chelius from Wikimedia Commons
Adoption of technology innovation is never easy. Whether it’s the launch of new technology such as AI in the marketplace or its adoption within an organization, the challenges are similar.
Innovators seed this journey within an organization. The innovation is then embraced by early adopters, thanks to their initial enthusiasm and openness to change. But then the pace slows and the initiative enters a chasm: there is often a lack of visibility, uncertainty about outcomes, and broader resistance to change.
This is where most initiatives fail.
For an innovation like AI to cross this chasm and go mainstream, it needs leadership intervention. Leaders must make AI successful by aligning the initiative with their corporate vision. They must demonstrate economic value by institutionalizing conversations on ROI from AI. Finally, they must shape the organizational culture to facilitate change and enable the viral adoption of AI-driven decision making.

Photo by Paige Cody on Unsplash

Read More

Where do I start with AI?

Over the last 7+ years, I’ve worked with many leaders who were looking to embark on their first AI journey. Most of them did not know how or where to start. Some thought they did. In order for your business to realize the gains possible using AI, you need to make sure you start your journey on the right path. Here are some key things to consider when beginning your journey…

Read More

Benchmark M1 vs Xeon vs Core i5 vs K80 and T4

Since their launch in November, Apple Silicon M1 Macs have shown very impressive performance in many benchmarks. These new processors are so fast that many tests compare the MacBook Air or Pro to high-end desktop computers instead of staying within the laptop range. Such comparisons usually make little sense, but here things are different: the M1 is faster than most of them at only a fraction of their energy consumption.

Read More

Customizing LSTMs with new features

Exploring new ideas such as resistance, strengthening and architectural restructuring.

Photo by Aaron Burden on Unsplash
The LSTM was, and still is, an essential part of the growth of machine learning. It allows for temporal pattern recognition, in the same way that convolutional networks allow for spatial pattern recognition. Ever since it was proposed, it has mostly remained the same. How much better could it be if we tried to change or experiment with its architecture and features?
A few months ago, I built a trainable LSTM from scratch. This will be used as the control for the experiment. I will be adding, removing, magnifying and downscaling certain features and observing their effect on convergence.
Let me explain what each of the features means specifically:

The input to an LSTM goes through a certain sequence of operations before being output. Here are the parts and how they work:
General Gate:
The general gate is a normal neural network that takes in the input to the LSTM. It kickstarts the process by which the data is passed through a set of different gates.
Select Gate:
The select gate is a twin neural network that takes the same input as the general gate. Although all the gates share the same architecture, it is the way each one is connected that gives it its purpose.
The select gate has a sigmoid function as its activation function, meaning that all values in its output range between 0 and 1. This output is multiplied by the output from the general gate. Since both networks’ architectures are identical, their output shapes are the same. This means that each value output by the select gate can be matched up with a value from the general gate.
We can therefore read a 1 from the select gate as “pass this value through to the output”, a 0 as “leave it out of the output”, and every value in between as a partial weighting.
See? By observing the use of the sigmoid function and the multiply gate, we understand how the same neural network architecture used for the general gate can be repurposed to form the select gate.
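
To make the gating concrete, here is a minimal sketch (toy numbers of my own) of how the sigmoid output masks the general gate’s output element by element:

import numpy as np

general_out = np.array([0.8, -0.3, 1.5])   # output of the general gate
select_out  = np.array([0.9, 0.1, 0.5])    # sigmoid output of the select gate, between 0 and 1
gated = general_out * select_out           # values near 1 pass through, values near 0 are suppressed
# gated -> [0.72, -0.03, 0.75]
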
Forget Gate:
The forget gate (wait for it…) shares its architecture with the general gate and the select gate. The magic (again) comes from the positioning and the activation function.
The forget gate has the same positioning and sigmoid activation as the select gate, but it also contains another part: it is multiplied by a value called the collected possibilities, which is the value carried over from the previous propagation of the network. This is what earns the forget gate its name: its ability to draw from the network’s “memory”.
The output from this forget gate (including the multiplication with the previous propagation) is added to the general output. This means that a scaled-down version of the previous propagation’s output is added to the general output’s value, in much the same way that current memories are coloured by previous memories.
As you go through these gates, observe how they start to build up an ability to access temporal features of the data.
Here is a comprehensive look at this process:

Photo from Wikipedia.
The input data is simultaneously fed into the three different neural networks.
The general output uses the hyperbolic tangent function to prevent exploding gradients.
The forget gate uses sigmoid and is multiplied by the collected possibilities. It is then added to the general output.
The select gate uses the sigmoid function and is then multiplied by the previous value (general output + forget gate output).
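
Condensed into a minimal numeric sketch (toy values of my own, with the same sigmoid/tanh helpers the article defines in Step 1 below):

import numpy as np

def sigmoid(x): return 1/(1+np.exp(-x))   # same helpers as in Step 1
def tanh(x): return np.tanh(x)

general_out = np.array([0.4, -1.2, 0.7])  # raw output of the general gate's network
forget_out  = np.array([2.0, -2.0, 0.0])  # raw output of the forget gate's network
select_out  = np.array([1.0, 0.5, -1.0])  # raw output of the select gate's network
colposs     = np.array([0.3, 0.9, -0.5])  # collected possibilities from the previous timestep

memory  = sigmoid(forget_out) * colposs        # forget gate: scale the previous memory
colposs = tanh(general_out) + memory           # cell state: add the kept memory to the general output
pred    = tanh(colposs) * sigmoid(select_out)  # select gate: choose what is exposed as the output
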
A few important things to note:
The line along which the parts are added to the output of the general gate is called the cell state. This is important because, when the network is later modified, it helps you understand exactly how each part is connected to the others.
When I said that the networks are identical, I was referring only to their configuration, not their weights.
Let’s build this vanilla LSTM network from scratch:
Step 1| Prerequisites:

import numpy as np
from matplotlib import pyplot as plt

def sigmoid(x):
    return 1/(1+np.exp(-x))

def sigmoid_p(x):
    return sigmoid(x)*(1 - sigmoid(x))

def relu(x):
    # relu/relu_p are referenced by deriv_func below, so minimal definitions are included here
    return np.maximum(0, x)

def relu_p(x):
    return (x > 0).astype(float)

def tanh(x):
    return np.tanh(x)

def tanh_p(x):
    return 1.0 - np.tanh(x)**2

def deriv_func(z, function):
    if function == sigmoid:
        return sigmoid_p(z)
    elif function == relu:
        return relu_p(z)
    elif function == tanh:
        return tanh_p(z)

For the program to work, it requires numpy for array manipulation and matplotlib for plotting the loss values. It also contains the mathematical definitions of the activation functions, as well as the derivatives of the functions.
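
As a quick sanity check of these helpers (toy values of my own), the analytic derivative should match a finite-difference estimate:

z = 0.3
eps = 1e-6
numeric = (sigmoid(z + eps) - sigmoid(z - eps)) / (2 * eps)
print(deriv_func(z, sigmoid), numeric)  # the two values should agree to several decimal places
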
Step 2| LSTM class:

class LSTM:
    def __init__(self, network):
        def plus_gate(x, y):
            return np.array(x) + np.array(y)

        def multiply_gate(x, y):
            return np.array(x) * np.array(y)

        class NeuralNetwork:
            def __init__(self, network):
                self.weights = []
                self.activations = []
                for layer in network:
                    input_size = layer[0]
                    output_size = layer[1]
                    activation = layer[2]
                    index = network.index(layer)
                    if layer[3] == 'RNN':
                        increment = network[-1][1]
                    else:
                        increment = 0
                    self.weights.append(np.random.randn(input_size + increment, output_size))
                    self.activations.append(activation)

The best programs have a rigid structure. The LSTM class has an __init__ section that holds all the important parts of the program: the plus gate and the multiply gate, as well as the start of the NeuralNetwork class, so that each can easily be changed.
The neural network contains all of the definitions of the weights, as well as the activation functions at each layer.
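
To make the configuration format concrete, here is a hypothetical network definition that matches the indexing used above (layer[0] is the input size, layer[1] the output size, layer[2] the activation, layer[3] a tag). The sizes, activations and the 'dense' tag are my own assumptions; only the 'RNN' tag is meaningful to the code, where it widens that layer’s weight matrix by the output size of the final layer.

network = [
    [3, 5, tanh, 'RNN'],       # 3 input features; this layer is widened to accept the previous prediction
    [5, 1, sigmoid, 'dense'],  # final layer produces a single value per timestep
]
lstm = LSTM(network)           # the gate methods later refer to a global instance named lstm
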
Step 3| Propagation:

def propagate(self, data):
    input_data = data
    Zs = []
    As = []
    for i in range(len(self.weights)):
        z = np.dot(input_data, self.weights[i])
        if self.activations[i]:
            a = self.activations[i](z)
        else:
            a = z
        As.append(a)
        Zs.append(z)
        input_data = a
    return As, Zs

This section is largely self-explanatory: every neural network requires forward propagation to yield results. This is the basic propagation that all perceptron-type neural networks share: matrix multiplication followed by an activation.
Step 4| Training:

def network_train(self, As, Zs, learning_rate, input_data, extended_gradient):
    As.insert(0, input_data)
    g_wm = [0] * len(self.weights)
    for z in range(len(g_wm)):
        a_1 = As[z].T
        pre_req = extended_gradient
        z_index = 0
        weight_index = 0
        for i in range(0, z*-1 + len(network)):
            if i % 2 == 0:
                z_index -= 1
                if self.activations[z]:
                    pre_req = pre_req * deriv_func(Zs[z_index], self.activations[z])
                else:
                    pre_req = pre_req * Zs[z_index]
            else:
                weight_index -= 1
                pre_req = np.dot(pre_req, self.weights[weight_index].T)
        a_1 = np.reshape(a_1, (a_1.shape[0], 1))
        pre_req = np.reshape(pre_req, (pre_req.shape[0], 1))
        pre_req = np.dot(a_1, pre_req.T)
        g_wm[z] = pre_req
    for i in range(len(self.weights)):
        self.weights[i] += g_wm[i]*learning_rate

# Back in LSTM.__init__: expose the gates and build the three internal networks
self.plus_gate = plus_gate
self.multiply_gate = multiply_gate
self.recurrent_nn = NeuralNetwork(network)
self.forget_nn = NeuralNetwork(network)
self.select_nn = NeuralNetwork(network)

The training step of the program is conceptually simple: it finds the partial derivative of each weight with respect to the loss function. How does it bridge such a large mathematical gap? The program takes it step by step.
First, it finds the partial derivative of the layer output with respect to the weight, then of the network output with respect to the layer output, and finally of the loss function with respect to the network output. According to the chain rule, multiplying all of these derivatives together gives the derivative linking the weight to the loss function. The actual implementation is more complicated, as the computer needs to know which weight and layer it is working on, and where in the network it currently is.
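
As a minimal worked illustration of that chain (a single weight, one layer, squared loss; toy numbers of my own, using the sigmoid helpers from Step 1):

w, x, t = 0.5, 2.0, 1.0        # weight, input, target
z = w * x                      # pre-activation
a = sigmoid(z)                 # layer output
dL_da = -2 * (t - a)           # loss -> output
da_dz = sigmoid_p(z)           # output -> pre-activation
dz_dw = x                      # pre-activation -> weight
dL_dw = dL_da * da_dz * dz_dw  # chain rule: multiply the pieces together
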
Step 5| Defining network parts:

def cell_state(self, input_data, memo, select):
    global rnn_As, rnn_Zs
    rnn_As, rnn_Zs = lstm.recurrent_nn.propagate(input_data)
    yhat_plus = tanh(rnn_As[-1])
    plus = self.plus_gate(yhat_plus, memo)
    collect_poss = plus
    yhat_mult = tanh(plus)
    mult = self.multiply_gate(yhat_mult, select)
    pred = mult
    return pred, collect_poss

def forget_gate(self, input_data, colposs):
    global forget_As, forget_Zs
    forget_As, forget_Zs = lstm.forget_nn.propagate(input_data)
    yhat_mult = sigmoid(forget_As[-1])
    mult = self.multiply_gate(colposs, yhat_mult)
    memo = mult
    return memo

def select_gate(self, input_data):
    global select_As, select_Zs
    select_As, select_Zs = lstm.select_nn.propagate(input_data)
    yhat_mult = sigmoid(select_As[-1])
    select = yhat_mult
    return select

This is the implementation of the parts that I described above. It defines how the input should be handled on its way to the output, so that each gate can be connected to the cell state to produce the output of the program.
Step 6| Defining full LSTM propagation:

def propagate(self, X, network):
    colposs = 1
    As = []
    for i in range(len(X)):
        input_data = X[i]
        if i == 0:
            increment = network[-1][1]
            input_data = list(input_data) + [0 for _ in range(increment)]
        else:
            input_data = list(input_data) + list(pred)
        input_data = np.array(input_data)
        memory = self.forget_gate(input_data, colposs)
        select = self.select_gate(input_data)
        pred, colposs = self.cell_state(input_data, memory, select)
        As.append(pred)
    return As

This section implements the previously described propagation process, where each of the gates feeds its output into the cell state, and the final output is formed there.
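
A minimal usage sketch, reusing the hypothetical network configuration and the global lstm instance from the earlier configuration sketch, and assuming the functions above are attached to the LSTM class as methods (toy data of my own):

X = [np.random.randn(3) for _ in range(4)]  # toy sequence: 4 timesteps of 3 features each
preds = lstm.propagate(X, network)          # one prediction per timestep
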
Step 7| Training the LSTM in full:

def train(self, X, y, network, iterations, learning_rate):
    colposs = 1
    loss_record = []
    for _ in range(iterations):
        for i in range(len(X)):
            input_data = X[i]
            if i == 0:
                increment = network[-1][1]
                input_data = list(input_data) + [0 for _ in range(increment)]
            else:
                input_data = list(input_data) + list(pred)
            input_data = np.array(input_data)
            memory = self.forget_gate(input_data, colposs)
            select = self.select_gate(input_data)
            pred, colposs = self.cell_state(input_data, memory, select)
            loss = sum(np.square(y[i]-pred).flatten())
            gloss_pred = (y[i]-pred)*2
            gpred_gcolposs = select
            gpred_select = colposs
            gloss_select = gloss_pred * gpred_select
            gpred_forget = select*sigmoid_p(colposs)*colposs
            gloss_forget = gloss_pred * gpred_forget
            gpred_rnn = select*sigmoid_p(colposs)
            gloss_rnn = gloss_pred*gpred_rnn
            self.recurrent_nn.network_train(rnn_As, rnn_Zs, learning_rate, input_data, gloss_rnn)
            self.forget_nn.network_train(forget_As, forget_Zs, learning_rate, input_data, gloss_forget)
            self.select_nn.network_train(select_As, select_Zs, learning_rate, input_data, gloss_select)
        As = self.propagate(X, network)
        loss = sum(np.square(y[i]-pred))
        loss_record.append(loss)
    return loss_record

This part of the network is extremely complicated. That is because I need to calculate the partial derivative of each individual weight with respect to the loss function of the LSTM, which means calculating many partial derivatives: between the loss function and the output, between the output and each gate, between each gate and its neural network, from the network to each layer, and finally from each layer to each weight. It is a lot of manual work to bring the right variables together for the program to function.
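
Continuing the earlier usage sketch with training (toy targets and hyperparameters of my own):

y = [np.random.randn(1) for _ in range(4)]  # one toy target per timestep
loss_record = lstm.train(X, y, network, iterations=200, learning_rate=0.01)
plt.plot(loss_record)                        # loss per iteration, as recorded by train()
plt.xlabel('iteration')
plt.ylabel('loss')
plt.show()
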
Now that we have the vanilla LSTM, let’s talk about the features experimented with:

Resistors and strengtheners are among the simplest features to add to an LSTM. A resistor in an LSTM works like one in an electrical circuit: it decreases the strength of a certain signal by a fixed amount. Let’s see what changes when we put resistors at different parts of the LSTM.

def resistor(x):
    resistance = 1/resistor_strength
    resist = np.full(x.shape, resistance)
    return x*resist

Pretty simple! Generate an array of values in the same shape as the value it will be applied to. A global value called resistor_strength sets how much the signal is scaled down: the higher the resistor strength, the lower the resulting value.
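
For example, with a resistor_strength of 2 every value in the signal is halved (a toy illustration of my own; resistor_strength is the global that the function reads):

resistor_strength = 2
signal = np.array([0.8, -0.4, 1.0])
print(resistor(signal))  # -> [ 0.4 -0.2  0.5]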

self.resistor = resistor

Remember to add this assignment within the __init__ function of the LSTM class so that the resistor can be referenced.

def forget_gate(self, input_data, colposs):
    global forget_As, forget_Zs
    forget_As, forget_Zs = lstm.forget_nn.propagate(input_data)
    yhat_mult = sigmoid(forget_As[-1])
    mult = self.multiply_gate(colposs, yhat_mult)
    memo = self.resistor(mult)
    return memo

I placed the resistor within the forget gate, to try to decrease the forget gate’s effect. This not only tests the resistor’s effect; where the resistor is placed also tests how important that value is to the training of the network.
Let’s compare the loss of the LSTM on a generated dataset:

This is a graph of the resistance against the minimum loss reached. Clearly, resistance does not help convergence!

In fact, a negative resistance value (strengthening) is actually more effective than the neutral resistance of 1.
Have we concluded that resistance is bad for networks?
Observe this graph when the resistance is applied on the select gate instead:

def select_gate(self, input_data):
    global select_As, select_Zs
    select_As, select_Zs = lstm.select_nn.propagate(input_data)
    yhat_mult = sigmoid(select_As[-1])
    select = self.resistor(yhat_mult)
    return select

The results are inconclusive. Like the other parts of the LSTM, these components only work when placed at particular points in the network.

As I have already stated, the function of each neural network is created by its positioning relative to the other neural networks in the LSTM. I will now create an “ignore” gate, which should be able to perform its namesake function.
The ignore gate is placed in between the forget gate and the general RNN. It uses the sigmoid activation function, as well as a multiplication gate, in which it is multiplied against the general neural network’s output.
How does this give the LSTM the ability to ignore? The gate creates a matrix of values, deemed the “filtered possibilities”. Its sigmoid function, much like the select gate’s, expresses the network’s approval of particular values in the general neural network’s output. Since this happens so low down in the cell state, the collected possibilities that are used for the memory are now based upon filtered possibilities. The LSTM has therefore gained the ability of selective memory.
Here is the code that I used to implement this:
Defining propagation of ignore gate:

def ignore_gate(self, input_data):
    global ignore_As, ignore_Zs
    ignore_As, ignore_Zs = lstm.ignore_nn.propagate(input_data)
    ignore = sigmoid(ignore_As[-1])
    return ignore

Adding to the init function of the LSTM:

self.plus_gate = plus_gate
self.multiply_gate = multiply_gate
self.recurrent_nn = NeuralNetwork(network)
self.forget_nn = NeuralNetwork(network)
self.select_nn = NeuralNetwork(network)
self.ignore_nn = NeuralNetwork(network)
self.resistor = resistor

Adding to LSTM propagation function:

def propagate(self, X, network):
    colposs = 1
    As = []
    for i in range(len(X)):
        input_data = X[i]
        if i == 0:
            increment = network[-1][1]
            input_data = list(input_data) + [0 for _ in range(increment)]
        else:
            input_data = list(input_data) + list(pred)
        input_data = np.array(input_data)
        ignore = self.ignore_gate(input_data)
        memory = self.forget_gate(input_data, colposs)
        select = self.select_gate(input_data)
        pred, colposs = self.cell_state(input_data, ignore, memory, select)
        As.append(pred)
    return As

Adding to the training function (new section with manually calculated derivatives):

def train(self, X, y, network, iterations, learning_rate):
    colposs = 1
    loss_record = []
    for _ in range(iterations):
        for i in range(len(X)):
            input_data = X[i]
            if i == 0:
                increment = network[-1][1]
                input_data = list(input_data) + [0 for _ in range(increment)]
            else:
                input_data = list(input_data) + list(pred)
            input_data = np.array(input_data)
            ignore = self.ignore_gate(input_data)
            memory = self.forget_gate(input_data, colposs)
            select = self.select_gate(input_data)
            pred, colposs = self.cell_state(input_data, ignore, memory, select)
            loss = sum(np.square(y[i]-pred).flatten())
            gloss_pred = (y[i]-pred)*2
            gpred_gcolposs = select
            gpred_select = colposs
            gloss_select = gloss_pred * gpred_select
            gpred_forget = select*sigmoid_p(colposs)
            gloss_forget = gloss_pred * gpred_forget
            gpred_ignore = select*sigmoid_p(colposs)*yhat_mult
            gloss_ignore = gloss_pred * gpred_ignore
            gpred_rnn = select*sigmoid_p(colposs)*ignore
            gloss_rnn = gloss_pred*gpred_rnn
            self.recurrent_nn.network_train(rnn_As, rnn_Zs, learning_rate, input_data, gloss_rnn)
            self.ignore_nn.network_train(ignore_As, ignore_Zs, learning_rate, input_data, gloss_ignore)
            self.forget_nn.network_train(forget_As, forget_Zs, learning_rate, input_data, gloss_forget)
            self.select_nn.network_train(select_As, select_Zs, learning_rate, input_data, gloss_select)
        As = self.propagate(X, network)
        loss = sum(np.square(y[i]-pred))
        loss_record.append(loss)
    return loss_record
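
One piece these listings rely on but do not show is the modified cell_state, which now takes the ignore output as a fourth argument. Here is a minimal sketch of how it might look, assuming (as described above) that the ignore output is multiplied against the general network’s output before it reaches the plus gate; this is my reconstruction, not the article’s code:

def cell_state(self, input_data, ignore, memo, select):
    global rnn_As, rnn_Zs
    rnn_As, rnn_Zs = lstm.recurrent_nn.propagate(input_data)
    yhat_plus = tanh(rnn_As[-1])
    filtered = self.multiply_gate(yhat_plus, ignore)  # ignore gate filters the general output
    plus = self.plus_gate(filtered, memo)             # filtered possibilities join the memory
    collect_poss = plus
    yhat_mult = tanh(plus)
    mult = self.multiply_gate(yhat_mult, select)
    pred = mult
    return pred, collect_poss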

What I have done with resistance, strengthening and the architectural restructuring of the LSTM is just the beginning! I am sure that you will have the creativity and intuition to do something truly good with the basic framework that I have given you in this article. You might even come up with some new features of your own!
Thank you for reading my article!

Read More

The Great British Baking Show: Random Forests Edition

If only machine learning could be as delicious as it is oftentimes perplexing. While learning data science concepts, I’ve found that it’s a good idea to approach it the same way as learning baking: start simple, grab the basic ingredients, and combine them together yourself one by one until you get a feel for it. APIs are a wonderful resource, but if all you know how to do is use them, it’s a bit like buying muffins and a can of frosting to make cupcakes and calling yourself a baker. Welcome to the tent.

Read More

Using Algorithms derived From Neuroscience research, Numenta demonstrates 50x speed improvements on Deep Learning Networks

Numenta has made some advances by applying a principle of the brain called sparsity. It compared sparse and dense networks by running its algorithms on Xilinx FPGAs (Field Programmable Gate Arrays) for a speech recognition task that used the Google Speech Commands (GSC) dataset.

Read More

Embold: Static Code Analyzer uses AI to help Developers analyze and improve their code

Embold is a simple but efficient AI-based static code analyzer that can help developers analyze and improve their code. The feature that truly makes it stand apart is its ability to analyze source code across four dimensions: code issues, design issues, metrics and duplication, and surface issues that impact stability, robustness, security, and maintainability.

Read More

Cato Networks introduces a new AI System to eliminate false positives in Security Systems

Cato Networks Ltd. has recently introduced a machine learning system that combines threat intelligence with real-time network information to eliminate false positive (FP) alerts, thereby reducing the cybersecurity team’s workload.

Read More

Intel to acquire SigOpt, an AI Hyperparameter Optimization Platform

Intel has confirmed that it is buying SigOpt Inc., an artificial intelligence startup developing software platforms to optimize AI models. Several private firms and research groups such as OpenAI use these software platforms to boost their AI models’ performance.

Read More

Microsoft introduces Lobe: A free Machine Learning application that allows you to create AI Models without coding

Microsoft has released Lobe, a free desktop application that lets Windows and Mac users create customized AI models without writing any code. Several customers are already using the app for tracking tourist activity around coral reefs, the company said.

Read More

Google introduces new version of Google Analytics powered by Machine Learning

Google has come up with a refresh of Google Analytics (Google Analytics 4) that offers new machine learning prediction features, extra privacy controls, and a streamlined interface.

Read More

Entropy application in the Stock Market

A lot of definitions and formulations of entropy are available. What in general is true is that entropy is used to measure information, surprise, or uncertainty regarding experiments’ possible outcomes. In particular, Shannon entropy is the one that is used most frequently in statistics and machine learning. For this reason, it’s the focus of our attention here.
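
As a quick illustration (a toy example of my own, not from the article), the Shannon entropy of a discrete distribution p is H(p) = -sum_i p_i * log2(p_i):

import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # ignore zero-probability outcomes
    return -np.sum(p * np.log2(p))  # entropy in bits

print(shannon_entropy([0.5, 0.5]))  # 1.0 bit: a fair coin is maximally uncertain
print(shannon_entropy([0.9, 0.1]))  # ~0.47 bits: a biased coin is more predictable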

Read More