Data was coming out of China, which had a four-month head start in the race to beat the pandemic. If machine-learning algorithms could be trained on that data to help doctors understand what they were seeing and make decisions, it just might save lives. “I thought, ‘If there’s any time that AI could prove its usefulness, it’s now,’” says Wynants. “I had my hopes up.” It never happened—but not for lack of effort.
PayPal’s plan to morph itself into a “super app” has been given a go for launch. According to PayPal CEO Dan Schulman, speaking to investors during this week’s second-quarter earnings call, the initial version of PayPal’s new consumer digital wallet app is now “code complete,” and the company is preparing to slowly ramp it up. Over the next several months, PayPal expects to be fully ramped up in the U.S., with new payment services, financial services, and commerce and shopping tools arriving every quarter.
Tabular data on its own does not convey all the information it contains; the messy format and the large number of entries make further analysis difficult. Hence the birth of various data visualization tools and techniques. Data visualization is the art of presenting data in graphical charts so that non-technical people can understand it easily. A well-judged combination of elements like colors, dimensions, and labels can produce a masterpiece of a visual report that reveals surprising insights and helps businesses grow.
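As a minimal illustration of the point about messy tables, here is a sketch in Python (using hypothetical sales rows, not any particular tool) that aggregates raw entries and renders them as a simple text bar chart:

```python
from collections import defaultdict

# Hypothetical messy tabular data: (region, sales) rows with repeated entries.
rows = [
    ("North", 120), ("South", 90), ("North", 60),
    ("East", 150), ("South", 30), ("East", 50),
]

# Aggregate before visualizing: the raw rows are hard to read at a glance.
totals = defaultdict(int)
for region, sales in rows:
    totals[region] += sales

# Render a simple text bar chart (one '#' per 10 units of sales).
for region, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{region:<6} {'#' * (total // 10)} {total}")
```

A charting library would replace the last loop with a bar chart, but the aggregation step is the same: visualization starts by reshaping the table into something a non-technical reader can scan.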
Imagine you woke up late for a video meeting with tousled hair and crumpled clothes. Then, without a care in the world, you switched on your laptop and turned on the webcam, and boom — you look perfectly formal for that morning meeting. No, this is not some sorcery, as you may be suspecting, but NVIDIA’s Vid2Vid Cameo, powered by NVIDIA’s Maxine.
IBM recently launched a new end-to-end machine learning pipeline starter kit to help developers and data scientists build machine learning applications and deploy them quickly in a cloud-native environment. The starter kit is part of the IBM Cloud-Native Toolkit, an open-source collection of assets that provides an environment for developing cloud-native applications for deployment on Kubernetes and Red Hat OpenShift. Assets created with the Cloud-Native Toolkit can be deployed in any cloud or hybrid cloud environment.
OpenAI today released Triton, an open source, Python-like programming language that enables researchers to write highly efficient GPU code for AI workloads. Triton makes it possible to reach peak hardware performance with relatively little effort, OpenAI claims, producing code on par with what an expert could achieve in as few as 25 lines.
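To give a feel for the programming model, here is a pure-Python sketch of the blocked (tiled) execution pattern a Triton vector-add kernel expresses. This is not actual Triton code — a real kernel would use `@triton.jit` with `tl.load`/`tl.store` on GPU memory — just an illustration of how work is split into block-level program instances:

```python
# Pure-Python sketch of Triton's block-level execution model.
# Each "program instance" (pid) handles one contiguous block of
# elements, with the final block masked to the array length.
def add_kernel(x, y, out, block_size):
    n = len(x)
    num_programs = (n + block_size - 1) // block_size  # grid size
    for pid in range(num_programs):
        start = pid * block_size
        for i in range(start, min(start + block_size, n)):  # masked tail
            out[i] = x[i] + y[i]

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [10.0, 20.0, 30.0, 40.0, 50.0]
out = [0.0] * 5
add_kernel(x, y, out, block_size=2)
print(out)  # [11.0, 22.0, 33.0, 44.0, 55.0]
```

In Triton proper, the outer loop over `pid` is replaced by a parallel launch grid on the GPU, which is where the claimed performance comes from.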
In this decade, companies across the globe have embraced the potential of artificial intelligence for digital transformation and enhanced customer experience. One important application of AI is enabling companies to put the pools of data available to them to smart business use. BMW is one of the world’s leading manufacturers of premium automobiles and mobility services. BMW uses artificial intelligence in critical areas like production, research and development, and customer service. BMW also runs a project dedicated to this technology, called Project AI, for the efficient use of artificial intelligence.
As a not-so-distant future that includes space tourism and people living off-planet approaches, the MIT Media Lab Space Exploration Initiative is designing and researching the activities humans will pursue in new, weightless environments.
Since 2017, the Space Exploration Initiative (SEI) has orchestrated regular parabolic flights through the ZERO-G Research Program to test experiments that rely on microgravity. This May, the SEI supported researchers from the Media Lab; MIT’s departments of Aeronautics and Astronautics (AeroAstro), Earth, Atmospheric and Planetary Sciences (EAPS), and Mechanical Engineering; MIT Kavli Institute; the MIT Program in Art, Culture, and Technology; the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University; the Center for Collaborative Arts and Media at Yale University; the multi-affiliated Szostak Laboratory; and the Harvard-MIT Program in Health Sciences and Technology to fly 22 different projects exploring research as diverse as fermentation, reconfigurable space structures, and the search for life in space.
Most of these projects resulted from the 2019 or 2020 iterations of MAS.838 / 16.88 (Prototyping Our Space Future) taught by Ariel Ekblaw, SEI founder and director, who began teaching the class in 2018. (Due to the Covid-19 pandemic, the 2020 flight was postponed, leading to two cohorts being flown this year.)
“The course is intentionally titled ‘Prototyping our Sci-Fi Space Future,’” she says, “because this flight opportunity that SEI wrangles, for labs across MIT, is meant to incubate and curate the future artifacts for life in space and robotic exploration — bringing the Media Lab’s uniqueness, magic, and creativity into the process.”
The class prepares researchers for the realities of parabolic flights, which involve conducting experiments in short, 20-second bursts of zero gravity. As the course continues to offer hands-on research and logistical preparation, and as more of these flights are executed, the projects themselves are demonstrating increasing ambition and maturity.
“Some students are repeat flyers who have matured their experiments, and [other experiments] come from researchers across the MIT campus from a record number of MIT departments, labs, and centers, and some included alumni and other external collaborators,” says Maria T. Zuber, MIT’s vice president for research and SEI faculty advisor. “In short, there was stiff competition to be selected, and some of the experiments are sufficiently far along that they’ll soon be suitable for spaceflight.”
Dream big, design bold
Both the 2020 and 2021 flight cohorts included daring new experiments that speak to SEI’s unique focus on research across disciplines. Some look to capitalize on the advantages of microgravity, while others seek to help find ways of living and working without the force that governs every moment of life on Earth.
Che-Wei Wang, Sands Fish, and Mehak Sarang from SEI collaborated on Zenolith, a free-flying pointing device to orient space travelers in the universe — or, as the research team puts it, a 3D space compass. “We were able to perform some maneuvers in zero gravity and confirm that our control system was functioning quite well, the first step towards having the device point to any spot in the solar system,” says Sarang. “We’ll still have to tweak the design as we work towards our ultimate goal of sending the device to the International Space Station!”
Then there’s the Gravity Loading Countermeasure Skinsuit project by Rachel Bellisle, a doctoral student in the Harvard-MIT Program in Health Sciences and Technology and a Draper Fellow. The Skinsuit is designed to replicate the effects of Earth gravity for use in exercise on future missions to the moon or to Mars, and to further attenuate microgravity-induced physiological effects in current ISS mission scenarios. The suit has a 10-plus-year history of development at MIT and internationally, with prior parabolic flight experiments. Skinsuit originated in the lab of Dava Newman, who now serves as Media Lab director.
“Designing, flying, and testing an actual prototype is the best way that I know of to prepare our suit designs for actual long-term spaceflight missions,” says Newman. “And flying in microgravity and partial gravity on the ZERO-G plane is a blast!”
Alongside the Skinsuit are two more projects flown this spring that involve wearables and suit prototypes: the Peristaltic Suit developed by Media Lab researcher Irmandy Wicaksono and the Bio-Digital Wearables for Space Health Enhancement project by Media Lab researcher Pat Pataranutaporn.
“Wearables have the potential to play a critical role in monitoring, supporting, and sustaining human life in space, lessening the need for human medical expert intervention,” Pataranutaporn says. “Also, having this microgravity experience after our SpaceCHI workshop … gave me so many ideas for thinking about other on-body systems that can augment humans in space — that I don’t think I would get from just reading a research paper.”
AgriFuge, from Somayajulu Dhulipala and Manwei Chan (graduate students in MIT’s departments of Mechanical Engineering and AeroAstro, respectively), offers future astronauts a rotating plant habitat that provides simulated gravity as well as a controllable irrigation system. AgriFuge anticipates a future of long-duration missions where the crew will grow their own plants — to replenish oxygen and food, as well as for the psychological benefits of caring for plants. Two more cooking-related projects that flew this spring include H0TP0T, by Larissa Zhou from Harvard SEAS, and Gravity Proof, by Maggie Coblentz of the SEI — each of which help demonstrate a growing portfolio of practical “life in space” research being tested on these flights.
The human touch
In addition to the increasingly ambitious and sophisticated individual projects, an emerging theme in SEI’s microgravity endeavor is a focus on approaches to different aspects of life and culture in space — not only in relation to cooking, but also architecture, music, and art.
Sanjana Sharma of the SEI flew her Fluid Expressions project this spring, which centers around the design of a memory capsule that functions as both a traveler’s painting kit for space and an embodied, material reminder of home. During the flight, she was able to produce three abstract watercolor paintings. “The most important part of this experience for me,” she says, “was the ability to develop a sense of what zero gravity actually feels like, as well as how the motions associated with painting differ during weightlessness.”
Ekblaw has been mentoring two new architectural projects as part of the SEI’s portfolio, building on her own TESSERAE work for in-space self-assembly: Self Assembling Space Frames by SEI’s Che-Wei Wang and Reconfigurable space structures by Martin Nisser of MIT CSAIL. Wang envisions his project as a way to build private spaces in zero-gravity environments. “You could think of it like a pop-up tent for space,” he says. “The concept can potentially scale to much larger structures that self-assemble in space, outside space stations.”
Onward and upward
Two projects that explore different notions of the search for life in space include Ø-scillation, a collaboration between several scientists at the MIT Kavli Institute, Media Lab, EAPS, and Harvard; and the Electronic Life-detection Instrument (ELI) by Chris Carr, former MIT EAPS researcher and current Georgia Tech faculty member, and Daniel Duzdevich, a postdoc at the Szostak Laboratory.
The ELI project is a continuation of work within Zuber’s lab, and has been flown on previous flights. “Broadly, our goals are to build a low-mass life-detection instrument capable of detecting life as we know it — or as we don’t know it,” says Carr. During the 2021 flight, the researchers tested upgraded hardware that permits automatic real-time sub-nanometer gap control to improve the measurement fidelity of the system — with generally successful results.
Microgravity Hybrid Extrusion, led by SEI’s mission integrator, Sean Auffinger, alongside Ekblaw, Nisser, Wang, and MIT Undergraduate Research Opportunities Program student Aiden Padilla, was tested on both flights this spring and works toward building in situ, large-scale space structures — it’s also one of the selected projects being flown on an ISS mission in December 2021. The SEI is also planning a prospective “Astronaut Interaction” mission on the ISS in 2022, where artifacts like Zenolith will have the chance to be manipulated by astronauts directly.
This is a momentous fifth anniversary year for SEI. As these annual flights continue, and the experiments aboard them keep growing more advanced, researchers are setting their sights higher — toward designing and preparing for the future of interplanetary civilization.
The data science lifecycle (DSLC) has been defined as an iterative process that leads from problem formulation through exploration, algorithmic analysis, and data cleaning to a verifiable solution that can be used for decision making. For companies creating models at scale, an enterprise machine learning operations (MLOps) platform not only needs to support enterprise-grade development and production, it needs to follow the same standard process that data scientists use.
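As a minimal illustration of that iterative flow, the lifecycle can be sketched as a chain of stages. The stage functions and the toy "score" below are hypothetical placeholders, not any particular MLOps platform's API:

```python
# Sketch of the DSLC as a pipeline of stages; a platform would
# standardize and automate this chain for every model.
def formulate(raw_rows):
    return {"goal": "predict churn", "rows": raw_rows}

def explore_and_clean(task):
    # Exploration/cleaning: drop unusable entries.
    task["rows"] = [r for r in task["rows"] if r is not None]
    return task

def analyze(task):
    # Placeholder "model": score is the fraction of usable rows.
    task["score"] = len(task["rows"]) / 4
    return task

def verify(task):
    # A verifiable solution: only models passing a check drive decisions.
    task["approved_for_decisions"] = task["score"] >= 0.5
    return task

task = [0.2, None, 0.9, 0.4]
for stage in (formulate, explore_and_clean, analyze, verify):
    task = stage(task)
print(task["approved_for_decisions"])  # True (3 of 4 rows were usable)
```

The point of the sketch is the shape, not the stages' contents: because the same chain runs every time, the process is repeatable, which is exactly what an enterprise platform has to guarantee.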
Machine learning (ML)—the ability for machines to perceive, learn from, abstract, and act on data—has been a catalyst for innovation and advancement across sectors, with national security being no exception. In the last year alone, there have been several prime examples of the enormous opportunity ML offers regarding artificial intelligence (AI) for defense and the intelligence community. The U.S. Department of Defense (DoD) is continuing efforts to scale AI and celebrating new achievements, like using AI to help control a U-2 “Dragon Lady” reconnaissance aircraft – the first time AI has been put in command of a U.S. military system.
The possibilities for advancement are endless: by helping with tasks related to data collection, processing, and analysis, ML can catch cyber breaches and hacks before humans can, speed up responses to electronic warfare attacks, and more closely target responses to kinetic fire through its continual updating and learning capabilities. Warfighters can also use ML to look across domains and resources, from ships to artillery, to match targets to resources.
As we settle into 2021, there’s one aspect of AI/ML that should not be overlooked: how to effectively get it into the hands of warfighters at the tactical edge, where fast decisions are at a premium and compute power and connectivity are often scarce. It is critical that these edge use cases characterize and shape planning for AI and ML-driven investment as digitization continues to accelerate the pace of war.
The versatility of a data labeling tool can make or break your data quality. And data quality can make or break your algorithms. And what happens when our algorithms misinterpret or fail? — Karthik Vasudevan, Founder at Traindata Inc. This post presents five questions to ask to help you choose the best data labeling tool.
Under new management, Intel aims to recapture a crown that it owned for decades and regain technology leadership in manufacturing chips by 2025. This will be challenging, as the company has to invest tens of billions of dollars and get its technology right in the wake of numerous missteps, but new CEO Pat Gelsinger said at an event that the big chipmaker is accelerating its investments in manufacturing processes and packaging innovations.
Google appears to be testing a new feature that allows users to add themselves and their parties to the waitlists of restaurants that would normally require a phone call. Powered by Duplex, Google’s AI-driven natural language processing technology that can converse with business owners over the phone, the waitlist capability could benefit hospitality organizations facing surges in traffic as pandemic fears abate.
In the first half of the last century, industrial robots such as the hulking one-armed Goliaths dominated the robotics space. Even though they were highly disruptive and served many human purposes, industrial robots were not sexy. Opening the door to the second half of the 20th century, attractive humanoid robots made their debut. Today, more and more robots are cropping up in offices, hospitals, and schools, and especially in labour-intensive workplaces like warehouses, fulfilment centres, and small manufacturing centres.
Few people know about or pay attention to this field of artificial intelligence, yet it is a novel and promising research area within deep learning. Nowadays we often encounter technologies that use ASR without realizing it, and its strides are bigger than ever. Automatic Speech Recognition is a technology able to recognize a voice, sound, or signal carried by acoustic waves; in short, the machine recognizes your voice and transforms it into a text transcription. We have seen several high-profile products and services that use ASR under the hood: Siri, Apple’s virtual assistant; Alexa, Amazon’s virtual assistant; Google Home; and so on.
Progressive use of artificial intelligence software for automating methods, programs, and other purposes has become common in the market. AI-based platforms incorporate powerful machine learning algorithms for automating business processes. Automation saves employees a great deal of time and energy, and it enables organizations to work more efficiently and profitably. Moreover, automation helps people refresh their skills and capacities.
Since the days of Aristotle in the West, humans have used classification to help organize their experience in the world. In the retail world, a good taxonomy has helped us organize and group our products into similar categories so that the products are easy to find.
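A tiny sketch makes the retail case concrete. The categories and products below are hypothetical; the point is that a taxonomy lets you recover the category path for any product, so similar items group together and are easy to find:

```python
# A hypothetical retail taxonomy as a nested mapping:
# top-level category -> subcategory -> products.
taxonomy = {
    "Clothing": {"Shirts": ["t-shirt", "polo"], "Shoes": ["sneaker", "boot"]},
    "Electronics": {"Audio": ["headphones", "speaker"]},
}

def category_path(product):
    """Return the [category, subcategory] path for a product, or None."""
    for top, subcats in taxonomy.items():
        for sub, products in subcats.items():
            if product in products:
                return [top, sub]
    return None

print(category_path("sneaker"))  # ['Clothing', 'Shoes']
```

A real product taxonomy is far larger and often learned or curated rather than hand-written, but the lookup structure is the same idea.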
We were all so wrong in how we thought about the future, and we are wrong to assume that artificial intelligence will dominate it someday. We missed it, failing to see how dependent on machines we already are.
Google-owned DeepMind has pushed the limits of artificial intelligence. One of the first introductions most people had to DeepMind was through AlphaZero. “AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.” AlphaZero achieved all of this through a process called reinforcement learning, essentially playing repeated games against itself until it identified winning strategies.
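To illustrate the self-play idea in miniature — this is a toy sketch, not AlphaZero's actual method (which combines deep networks with Monte Carlo tree search) — here is tabular Q-learning on the game of Nim, where players alternately remove 1–3 stones and whoever takes the last stone wins. Both sides share one value table, so the agent improves by playing against itself:

```python
import random

random.seed(0)
Q = {}  # (stones_left, move) -> estimated value for the player moving

def best_move(stones, eps=0.1):
    """Greedy move with epsilon-random exploration during training."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))

def play_and_learn(start=10, episodes=5000, alpha=0.5):
    for _ in range(episodes):
        stones, history = start, []
        while stones > 0:
            m = best_move(stones)
            history.append((stones, m))
            stones -= m
        # The player who took the last stone wins; rewards alternate
        # sign moving backward through the game, since the players
        # alternate turns.
        reward = 1.0
        for state_move in reversed(history):
            old = Q.get(state_move, 0.0)
            Q[state_move] = old + alpha * (reward - old)
            reward = -reward

play_and_learn()
print(best_move(3, eps=0.0))  # 3: take all remaining stones and win
```

After self-play, the learned policy takes all remaining stones whenever it can win immediately — a winning strategy it discovered purely by playing itself, which is the core of the idea behind AlphaZero's training loop.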
When you build machine learning models, it’s common to run dozens or hundreds of experiments to find the correct input data, parameters, and algorithm. The more experiments you run, the harder it gets to remember what works and what doesn’t.
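A minimal sketch of hand-rolled experiment tracking shows the idea: record every run's parameters and metric so the best configuration can be recovered instead of remembered. The parameter names and scores are hypothetical; real projects often use a dedicated tracking tool:

```python
import csv
import io

experiments = []

def log_experiment(params, metric):
    """Record one run's hyperparameters together with its result."""
    experiments.append({**params, "metric": metric})

# Hypothetical runs with different hyperparameters.
log_experiment({"lr": 0.1, "depth": 3}, metric=0.81)
log_experiment({"lr": 0.01, "depth": 5}, metric=0.87)
log_experiment({"lr": 0.1, "depth": 5}, metric=0.84)

# Recover the best configuration instead of trying to remember it.
best = max(experiments, key=lambda e: e["metric"])
print(best)  # {'lr': 0.01, 'depth': 5, 'metric': 0.87}

# Persist the log as CSV so results survive across sessions.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["lr", "depth", "metric"])
writer.writeheader()
writer.writerows(experiments)
```

Even this bare-bones log answers the question the paragraph raises — what worked and what didn't — and it scales from dozens of runs to hundreds without relying on memory.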