[Paper Summary] Washington University Researchers propose a Deep Learning Model that automates Brain Tumor Classification

Biopsies are usually the first step in diagnosing a case of brain cancer. Surgeons remove a thin layer of tissue from the tumor and examine it closely under a microscope for signs of disease. Although biopsies are highly informative, the samples collected represent only a small fraction of the whole tumor. MRI is a less invasive but time-consuming alternative, as radiologists have to manually map out the tumor area from the scan before classification.

Read More

[Paper Summary] A new Google AI Research Study discovers Anomalous Data using Self Supervised Learning

New Google AI research introduces a 2-stage framework that uses recent progress on self-supervised representation learning and classic one-class algorithms. This framework is simple to train and shows SOTA performance on various benchmarks, including CIFAR, f-MNIST, Cat vs. Dog, and CelebA. Following that, they offer a novel representation learning approach for a practical industrial defect detection problem using the same architecture. On the MVTec benchmark, the framework achieves a new state-of-the-art.
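
To make the two-stage idea concrete, here is a minimal sketch: stage one produces embeddings from a self-supervised encoder (stood in below by random features), and stage two fits a classic one-class algorithm on those embeddings. The encoder stand-in, dimensions, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of the two-stage framework: (1) obtain a self-supervised
# representation, (2) fit a classic one-class algorithm on the embeddings.
# The encoder is stood in by random features here; in practice it would be
# a network trained with a self-supervised objective.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(500, 128))  # stage-1 output: normal data only
test_embeddings = rng.normal(size=(10, 128))

# Stage 2: classic one-class classifier on the learned representation.
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
detector.fit(train_embeddings)
anomaly_scores = -detector.score_samples(test_embeddings)  # higher = more anomalous
print(anomaly_scores)
```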

Read More

[Paper Summary] An AI system trained by Loughborough University researchers recognizes pre-movement patterns from an EEG

A group of researchers from the Intelligent Automation Center at Loughborough University has published a research paper focused on training robots to recognize the intention of an arm movement before the human actually makes the movement.

Read More

[Paper Summary] Stanford Researchers use Deep Learning to predict Biological Structures, like RNAs, more accurately than ever before

Determining the 3D structures of biological molecules, like RNAs, is difficult and often requires millions of dollars in extensive effort. Stanford University researchers have devised a new deep learning algorithm called ARES (Atomic Rotationally Equivariant Scorer) to overcome this challenge by computationally predicting accurate structures.

Read More

[Paper Summary] Google and Mayo Clinic Researchers propose a new AI Algorithm to improve Brain Stimulation devices to treat disease

Electrical stimulation has the potential to widen treatment possibilities for millions of people with movement disorders, such as Parkinson’s disease, and epilepsy. In the future, this technology may help further treat psychiatric illness or even assist in recovery from brain injuries like stroke.

Read More

[Paper Summary] AI Researchers from ShanghaiTech and UC San Diego introduce SofGAN: A Portrait Image Generator with Dynamic Styling

Researchers in Shanghai and the United States have created a GAN-based portrait creation system that lets users build new faces with previously unattainable levels of control over specific features, including hair, eyes, spectacles, textures, and color.

Read More

[Paper Summary] DeepMind introduces PonderNet, a new AI Algorithm that allows Artificial Neural Networks to learn to “think for a while” before answering

DeepMind introduces PonderNet, a new algorithm that allows artificial neural networks to learn to think for a while before answering. This improves the ability of these neural networks to generalize outside of their training distribution and answer tough questions with more confidence than ever before.
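
The "think for a while" mechanism can be pictured with a toy sketch: a recurrent step emits a prediction and a halting probability at every step, and the final output weights each step's prediction by the probability of halting exactly there. This is a simplified illustration under our own assumptions (the module names, sizes, and GRU cell are ours), not DeepMind's implementation.

```python
# Toy sketch of PonderNet-style adaptive computation (illustrative only):
# each step emits a prediction and a halting probability; the expected
# output weights every step by the probability of halting exactly there.
import torch
import torch.nn as nn

class PonderSketch(nn.Module):
    def __init__(self, dim=16, max_steps=5):
        super().__init__()
        self.cell = nn.GRUCell(dim, dim)     # recurrent "pondering" step
        self.out_head = nn.Linear(dim, 1)    # per-step prediction
        self.halt_head = nn.Linear(dim, 1)   # per-step halting probability
        self.max_steps = max_steps

    def forward(self, x):
        h = torch.zeros_like(x)
        still_running = torch.ones(x.size(0), 1)  # P(not halted before step n)
        expected_y = torch.zeros(x.size(0), 1)
        for _ in range(self.max_steps - 1):
            h = self.cell(x, h)
            lam = torch.sigmoid(self.halt_head(h))        # halting prob this step
            expected_y += still_running * lam * self.out_head(h)
            still_running = still_running * (1.0 - lam)
        h = self.cell(x, h)
        return expected_y + still_running * self.out_head(h)  # remaining mass

model = PonderSketch()
print(model(torch.randn(4, 16)).shape)  # torch.Size([4, 1])
```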

Read More

Snapchat shows how it uses GPUs to accelerate Machine Learning (ML) Inferences

Machine learning (ML) and artificial intelligence (AI) have transformed how industries make business decisions. Many firms now leverage ML and AI to compile consumer data and analyze and predict future consumer behaviour. This has allowed them to process high volumes of data rapidly and accurately and extract valuable insights that inform promising actions for their business.

Read More

[Paper Summary] This new Study shows Artificial Neural Networks (ANN) based on Human Brain Connectivity can perform Cognitive Tasks efficiently

A new breakthrough may have been achieved in the domain of artificial intelligence. According to a new study by a team of researchers from The Neuro (Montreal Neurological Institute-Hospital) and the Quebec Artificial Intelligence Institute, artificial intelligence networks modelled on human brain connectivity can perform cognitive tasks efficiently and effectively. Drawing on a sizable open-science repository, the researchers reconstructed the brain’s connectivity pattern and applied it to an artificial neural network (ANN) to test whether it could achieve cognitive abilities like those of the human brain.
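
One plausible way to picture this setup (a sketch under our own assumptions, not the study's code) is to use an empirical connectivity matrix as a binary mask that constrains which recurrent weights of an ANN may be nonzero:

```python
# Illustrative sketch: constrain a recurrent network's weights with a
# connectome-derived binary mask, so only empirically observed connections
# can carry signal. The random "connectome" below is a stand-in.
import torch
import torch.nn as nn

n_regions = 100
connectome = (torch.rand(n_regions, n_regions) > 0.8).float()  # stand-in mask

class ConnectomeRNNCell(nn.Module):
    def __init__(self):
        super().__init__()
        self.W = nn.Parameter(torch.randn(n_regions, n_regions) * 0.1)
        self.register_buffer("mask", connectome)

    def forward(self, x, h):
        # zero out weights for region pairs the connectome does not link
        return torch.tanh(x + h @ (self.W * self.mask).T)

cell = ConnectomeRNNCell()
h = torch.zeros(1, n_regions)
h = cell(torch.randn(1, n_regions), h)
print(h.shape)  # torch.Size([1, 100])
```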

Read More

[Paper Summary] IBM and Earlham Institute Researchers demonstrate the power of AI and Machine Learning (ML) based models for deeper insight into the Circadian Clock

Anyone who has flown a long distance will tell you that jetlag is the most frustrating part of the trip. While there are numerous ways to deceive the body, it is challenging to go against our natural inner rhythm, which governs our 24-hour sleep-wake cycles. Research by IBM and Earlham Institute scientists on the power of AI and machine learning-based approaches for better understanding the circadian clock and its regulation has now been published in the Proceedings of the National Academy of Sciences of the United States of America (PNAS).

Read More

[Paper Summary w. Video] OpenAI Releases An Improved Version Of Its Codex AI Model

Today, OpenAI is releasing a new and improved version of its Codex AI model to the public. Codex is a descendant of OpenAI’s GPT-3, which was released last summer. While Codex shares the same training data as its predecessor, it has an added advantage in that it can read and then complete text prompts submitted by a human user. Codex is built on the same language engine as GPT-3, but it was trained specifically on code.
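
For context, querying the Codex beta at the time looked roughly like the snippet below, using the then-current OpenAI Python client and the beta engine name "davinci-codex"; access was invitation-gated, and the prompt and parameters here are just an example.

```python
# Rough example of a Codex completion request with the OpenAI Python client
# of that era (access to the beta was gated; parameters are illustrative).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

response = openai.Completion.create(
    engine="davinci-codex",
    prompt="# Python: return the n-th Fibonacci number\ndef fib(n):",
    max_tokens=64,
    temperature=0,
)
print(response["choices"][0]["text"])  # Codex's suggested continuation
```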

Read More

[Paper Summary] DGIST Team introduces ‘DRANet’, an AI Neural Network module that can separate and convert Environmental Information

As a result of recent advances in deep learning (DL), deep neural networks (DNNs) have been widely used to improve model performance in computer vision, natural language processing, and more. However, existing domain adaptation methods learn only features associated with a shared domain, so domain gaps between datasets significantly degrade model performance.

Read More

[Paper Summary] IBM and MJFF’s latest AI Research uses Machine Learning to predict progression of Parkinson’s Disease

New research by IBM and the Michael J. Fox Foundation for Parkinson’s Research (MJFF), published in The Lancet Digital Health, details an AI model that predicts the progression of Parkinson’s disease. The model groups typical symptom patterns and examines longitudinal patient data to predict how quickly symptoms will progress over time, focusing on timing and severity.

Read More

[Paper Summary] Facebook AI releases ‘VoxPopuli’, a large-scale open multilingual Speech Corpus for AI translations in NLP Systems

With the wide-scale use of speech recognition and translation technologies, these AI systems can be implemented in many different languages. But at this point, they are only available for a handful of widely spoken languages like English or Mandarin – there is still plenty to do before they work with the 6,500+ other human languages.

Read More

NVIDIA and King’s College London use Cambridge-1 to build AI Models to generate synthetic brain images

NVIDIA and King’s College London have revealed new information about one of the first projects to run on Cambridge-1, the UK’s most powerful supercomputer. Cambridge-1 was announced in October last year and cost $100 million to build.

Read More

[Paper Summary] DeepMind Researchers introduce Epistemic Neural Networks (ENNs) for Uncertainty Modeling in Deep Learning

Deep learning algorithms are widely used in numerous AI applications because of their flexibility and computational scalability, which make them suitable for complex problems. However, most deep learning methods today neglect epistemic uncertainty – uncertainty about what the model does not know – which is crucial for safe and fair AI. A new DeepMind study proposes a way to quantify epistemic uncertainty, along with new perspectives on existing methods, all to improve our statistical understanding of deep learning.
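
As a rough illustration of the concept (our own sketch, not DeepMind's code): an epistemic neural network conditions its prediction on an "epistemic index" z, and variation of the prediction across z expresses what the model does not know. A deep ensemble, where z selects a member, is one classic instance:

```python
# Illustrative sketch: an epistemic neural network conditions its prediction
# on an epistemic index z; variation over z expresses what the model does
# not know. A deep ensemble (index = ensemble member) is one classic case.
import torch
import torch.nn as nn

class EnsembleENN(nn.Module):
    def __init__(self, num_members=5, dim=8):
        super().__init__()
        self.members = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
            for _ in range(num_members)
        )

    def forward(self, x, z):
        # z selects which member (epistemic index) produces the prediction
        return self.members[z](x)

enn = EnsembleENN()
x = torch.randn(10, 8)
preds = torch.stack([enn(x, z) for z in range(5)])
print(preds.std(dim=0).mean())  # spread over z ~ epistemic uncertainty
```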

Read More

[Paper Summary] Scientists have created a new Tool ‘Storywrangler’ that can explore billions of Social Media messages in order to Predict Future Conflicts and Turmoil

Scientists have recently developed an instrument to delve deeper into the billions of posts made on Twitter since 2008. The new tool is capable of providing an unprecedented, minute-by-minute view of popularity. The research was carried out by a team at the University of Vermont, which calls the instrument Storywrangler.

Read More

[Paper] Google AI introduces a pull-push Denoising Algorithm and Polyblur: A Deblurring Method that eliminates noise and blur in images

Despite the advances in imaging technology, image noise and limited sharpness remain the most critical factors degrading the visual quality of an image. Noise can be linked to the particle nature of light, or it may be introduced by electronic components during the read-out process. The captured noisy signal is then processed by the camera’s image signal processor (ISP), where it can be enhanced, amplified, and distorted. Image blur can be caused by a wide range of phenomena, from inadvertent camera shake during capture and incorrect camera focusing to limited sensor resolution and the finite lens aperture.
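
The deblurring side can be sketched in a few lines: approximate the inverse of an estimated blur with a low-degree polynomial of the blur operator itself, so that deblurring only requires re-applying the blur a couple of times. The Gaussian kernel and the truncated-series coefficients below are illustrative assumptions, not the paper's blur estimator.

```python
# Illustrative "polynomial of the blur" deblurring: for a blur operator K,
# truncate the Neumann series K^-1 ~ I + (I-K) + (I-K)^2 = 3I - 3K + K^2,
# so the estimate is 3y - 3Ky + K(Ky). Kernel and degree are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))                        # stand-in "sharp" image
blur = lambda img: gaussian_filter(img, sigma=1.0)  # assumed estimated blur K
y = blur(sharp)                                     # mildly blurred observation

deblurred = 3 * y - 3 * blur(y) + blur(blur(y))

print(np.abs(sharp - y).mean(), np.abs(sharp - deblurred).mean())  # error drops
```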

Read More

[Paper] OpenAI reveals details about its Deep Learning Model ‘Codex’: The backbone of GitHub’s ‘Copilot’ Tool

In their latest paper, researchers at OpenAI reveal details about a deep learning model called Codex. This technology is the backbone of Copilot, an AI pair programmer tool jointly developed by GitHub and OpenAI that’s currently available in beta to select users. The paper explores the process of repurposing their flagship language model GPT-3 to create Codex, as well as how far you can trust deep learning in programming. The OpenAI scientists managed this feat by using a series of new approaches that proved successful with previous models.

Read More