ACM names Alfred Vaino Aho and Jeffrey David Ullman recipients of the 2020 ACM A.M. Turing Award

ACM, the Association for Computing Machinery, named Alfred Vaino Aho and Jeffrey David Ullman recipients of the 2020 ACM A.M. Turing Award for fundamental algorithms and theory underlying programming language implementation, and for synthesizing these results and those of others in their highly influential books, which educated generations of computer scientists.

Read More

[Paper] Is AI Learning to Understand Emotions through Visual Art?

Emotional AI

Emotional AI is not far away, given recent developments in the field.
Artificial intelligence has already made its mark on our lives. The adoption of disruptive technologies has redefined industries and how they operate. However, the fear that AI could become capable of taking over the human race has loomed over the field from the start, and most of us have been influenced by the sci-fi movies and books that portray AI as an evil entity, talking and behaving like humans.

Read More

Rust detection using machine learning on AWS

Visual inspection of industrial environments is a common requirement across heavy industries. Many of these industries deal with huge metal surfaces and harsh environments, where metal corrosion and rust are a common problem. In this post, we describe how to build a serverless pipeline to create ML models for corrosion detection using Amazon SageMaker and other AWS services. The result is a fully functioning app to help you detect metal corrosion. We use several AWS services along the way.
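
The full walkthrough sits behind the link; purely as a rough sketch of the training step, the snippet below launches a SageMaker training job with the built-in image-classification algorithm on labeled surface images. The bucket names, channel paths, and hyperparameters are placeholders, not values from the post.

```python
# Hypothetical sketch: train a corrosion/rust classifier with SageMaker's
# built-in image-classification algorithm. All S3 paths are placeholders.
import sagemaker
from sagemaker import image_uris
from sagemaker.estimator import Estimator

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes execution inside a SageMaker notebook
container = image_uris.retrieve("image-classification", session.boto_region_name)

estimator = Estimator(
    image_uri=container,
    role=role,
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    output_path="s3://example-bucket/corrosion/output",  # placeholder bucket
    sagemaker_session=session,
)
estimator.set_hyperparameters(
    num_classes=2,              # corroded vs. clean surface
    num_training_samples=1000,  # placeholder dataset size
    epochs=15,
    image_shape="3,224,224",
)
estimator.fit({
    "train": "s3://example-bucket/corrosion/train",            # placeholder channels
    "validation": "s3://example-bucket/corrosion/validation",
})
```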

Read More

Use of Synthetic Data, in Early Stages, Seen as an Answer to Data Bias

Ensuring that the huge volumes of data on which many AI applications rely are not biased, and that they comply with restrictive data privacy regulations, is a challenge that a new industry is positioning itself to address: synthetic data production. Synthetic data is computer-generated data that can be used as a substitute for data from the real world.
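
As a toy illustration of the concept only (not any vendor's product, and far simpler than the learned generative approaches used in practice), the snippet below creates a fully computer-generated labeled dataset with scikit-learn and trains a model on it, with no real-world records involved.

```python
# Toy illustration of synthetic data: computer-generated records stand in
# for real-world data during model development. Not a production approach.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Generate a synthetic tabular dataset; no real individuals are represented.
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on synthetic hold-out:", model.score(X_test, y_test))
```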

Read More

Drug ranking using machine learning systematically predicts the efficacy of anti-cancer drugs

Drug efficacy prediction

Artificial intelligence and machine learning (ML) promise to transform cancer therapies by accurately predicting the most appropriate therapies to treat individual patients. Here, we present an approach, named Drug Ranking Using ML (DRUML), which uses omics data to produce ordered lists of >400 drugs based on their anti-proliferative efficacy in cancer cells.
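
The actual DRUML models are specified in the paper; the sketch below is only a schematic of the general ranking idea, fitting one regressor per drug on made-up "omics" features and ordering drugs by predicted efficacy for a new sample. Every name and number here is a placeholder.

```python
# Schematic of drug ranking only (not the published DRUML method):
# one regressor per drug, then sort drugs by predicted efficacy for a sample.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_samples, n_features = 200, 50
X = rng.normal(size=(n_samples, n_features))                  # fake omics features
drugs = [f"drug_{i}" for i in range(10)]                      # placeholder drug names
responses = {d: rng.normal(size=n_samples) for d in drugs}    # fake efficacy values

models = {d: RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
          for d, y in responses.items()}

new_sample = rng.normal(size=(1, n_features))
ranking = sorted(drugs, key=lambda d: models[d].predict(new_sample)[0])
print("top 5 predicted drugs:", ranking[:5])
```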

Read More

Guide to Interactive Image Synthesis with Anycost GANs

Generative adversarial networks (GANs) have become exceedingly good at photorealistic image synthesis from randomly sampled latent codes. Additionally, the generated output images can be easily transformed or edited (e.g., adding a smile or glasses) by tweaking the latent code. However, due to…
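
Latent-code editing generally amounts to moving the sampled code along a learned attribute direction before re-generating the image. The toy PyTorch sketch below shows only that arithmetic; `generator` and `smile_direction` are assumed placeholders, not the Anycost GAN release.

```python
# Toy sketch of latent-code editing: shift a sampled latent code along an
# attribute direction (e.g., "smile") and re-generate. Placeholders only.
import torch

latent_dim = 512
z = torch.randn(1, latent_dim)                 # randomly sampled latent code
smile_direction = torch.randn(latent_dim)      # in practice, a learned direction
smile_direction = smile_direction / smile_direction.norm()

strength = 2.0                                 # edit intensity
z_edited = z + strength * smile_direction      # move the code along the direction

# original_image = generator(z)                # hypothetical generator call
# edited_image   = generator(z_edited)         # same content, with the edit applied
```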

Read More

AWS and Hugging Face collaborate to simplify and accelerate adoption of Natural Language Processing Models

Just like computer vision a few years ago, the decades-old field of natural language processing (NLP) is experiencing a fascinating renaissance. Not a month goes by without a new breakthrough! Indeed, thanks to the scalability and cost-efficiency of cloud-based infrastructure, researchers are finally able to train complex deep learning models on very large text datasets, in order to solve business problems such as question answering, sentence comparison, or text summarization.
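
To make the tasks mentioned above concrete, the short example below uses the Hugging Face transformers library's high-level pipeline API for question answering and summarization; the default checkpoints it downloads are an implementation detail and are not necessarily the models discussed in the collaboration.

```python
# Minimal examples of the NLP tasks mentioned above, via transformers pipelines.
from transformers import pipeline

qa = pipeline("question-answering")
answer = qa(
    question="What lets researchers train complex deep learning models?",
    context="Thanks to the scalability and cost-efficiency of cloud-based "
            "infrastructure, researchers can train complex deep learning "
            "models on very large text datasets.",
)
print(answer["answer"])

summarizer = pipeline("summarization")
summary = summarizer(
    "Natural language processing is experiencing a renaissance, driven by "
    "large models trained on very large text datasets in the cloud.",
    max_length=25, min_length=5,
)
print(summary[0]["summary_text"])
```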

Read More

Why AI struggles to grasp cause and effect

Robot scientist

When you watch a short video of a batter hitting a baseball, you can make inferences about causal relations between different elements of the scene. For instance, you can see the bat and the baseball player’s arm moving in unison, but you also know that it is the player’s arm that is causing the bat’s movement and not the other way around. You also don’t need to be told that the bat is causing the sudden change in the ball’s direction.

Read More

Model Scaling that’s both accurate and fast: Facebook AI…

The past several years have seen the rapid development of new hardware for training and running convolutional neural networks. Highly-parallel hardware accelerators such as GPUs and TPUs have enabled machine learning researchers to design and train more complex and accurate neural networks that can be employed in more complex real-life applications.

Read More

Trending toward Concept Building – A Review of Model Interpretability for Deep Neural Networks

Explaining how deep neural networks work is hard to do. It is an active area of research in academia and industry. Data scientists need to stay current in order to create models that are safe and usable. Leaders need to know how to avoid the risk of unethical, biased, or misunderstood models. In this post, I break down trends in network interpretability applied to image data. Some of the approaches covered apply to non-image-based networks as well.
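
One of the simplest image-interpretability techniques is a vanilla-gradient saliency map, which asks which input pixels most affect the predicted class score. The sketch below is a generic PyTorch example of that idea, not code from the post.

```python
# Generic vanilla-gradient saliency map in PyTorch (illustrative only).
import torch
import torchvision.models as models

model = models.resnet18().eval()   # in practice, load a trained model's weights

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input image
scores = model(image)
top_class = scores.argmax(dim=1).item()

# Backpropagate the top class score to the input pixels
scores[0, top_class].backward()
saliency = image.grad.abs().max(dim=1).values            # (1, 224, 224) heatmap
print(saliency.shape)
```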

Read More

A complete Logistic Regression Algorithm for Image Classification in Python from scratch

A detailed walkthrough of a logistic regression algorithm, complete with a project. Logistic regression is very popular in machine learning and statistics, and it works well on both binary and multiclass classification. I have previously written tutorials on binary and multiclass classification with logistic regression; this article focuses on image classification with logistic regression.
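
The heart of such a from-scratch implementation is just the sigmoid, the cross-entropy gradient, and a gradient-descent loop. A minimal NumPy version for binary classification on flattened images might look like the following; shapes and hyperparameters are illustrative rather than taken from the article.

```python
# Minimal from-scratch binary logistic regression on flattened images (NumPy).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(X, y, lr=0.1, epochs=500):
    """X: (n_samples, n_pixels) images scaled to [0, 1]; y: 0/1 labels."""
    n_samples, n_features = X.shape
    w, b = np.zeros(n_features), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)               # predicted probabilities
        grad_w = X.T @ (p - y) / n_samples   # gradient of the cross-entropy loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return (sigmoid(X @ w + b) >= 0.5).astype(int)

# Tiny smoke test on random "images"
rng = np.random.default_rng(0)
X = rng.random((200, 28 * 28))
y = (X.mean(axis=1) > 0.5).astype(int)
w, b = train_logistic_regression(X, y)
print("train accuracy:", (predict(X, w, b) == y).mean())
```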

Read More

A New Lens on Understanding Generalization in Deep Learning

Understanding generalization is one of the fundamental unsolved problems in deep learning. Why does optimizing a model on a finite set of training data lead to good performance on a held-out test set? This problem has been studied extensively in machine learning, with a rich history going back more than 50 years. There are now many mathematical tools that help researchers understand generalization in certain models.

Read More

Federated Learning: A decentralized form of Machine Learning

Most major consumer tech companies that are focused on AI and machine learning now use federated learning – a form of machine learning that trains algorithms on devices distributed across a network, without the need for data to leave each device. Given the increasing awareness of privacy issues, federated learning could become the preferred method of machine learning for use cases that use sensitive data (such as location, financial, or health data).
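
The core mechanic is easy to see in a few lines: each device trains on its own data, and only model weights travel to a server that averages them. The sketch below is a bare-bones federated-averaging toy in NumPy, not any production framework.

```python
# Bare-bones federated averaging: devices fit local models on private data;
# only weights are sent to the server, never the raw records.
import numpy as np

def local_update(w, X, y, lr=0.1, steps=20):
    """One device's local logistic-regression training on its private data."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
n_features = 10
# Simulated private datasets held by three devices (never pooled together)
devices = [(rng.random((50, n_features)), rng.integers(0, 2, 50)) for _ in range(3)]

global_w = np.zeros(n_features)
for _ in range(10):
    # Each device trains locally, starting from the current global model
    local_ws = [local_update(global_w.copy(), X, y) for X, y in devices]
    # The server aggregates by averaging the returned weights
    global_w = np.mean(local_ws, axis=0)

print("global weights after 10 rounds:", np.round(global_w, 3))
```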

Read More