How BMW powers its processes with Artificial Intelligence

BMW AI design

In this decade, companies across the globe have embraced the potential of artificial intelligence for digital transformation and enhanced customer experience. One important application of AI is enabling companies to put the pools of data they already hold to smart business use. BMW is one of the world's leading manufacturers of premium automobiles and mobility services. BMW uses artificial intelligence in critical areas like production, research and development, and customer service. It also runs a dedicated program, Project AI, to ensure the technology is used efficiently.

Read More

Life in space: Preparing for an increasingly tangible reality

Prepare for space

As a not-so-distant future that includes space tourism and people living off-planet approaches, the MIT Media Lab Space Exploration Initiative is designing and researching the activities humans will pursue in new, weightless environments. 

Since 2017, the Space Exploration Initiative (SEI) has orchestrated regular parabolic flights through the ZERO-G Research Program to test experiments that rely on microgravity. This May, the SEI supported researchers from the Media Lab; MIT’s departments of Aeronautics and Astronautics (AeroAstro), Earth, Atmospheric and Planetary Sciences (EAPS), and Mechanical Engineering; MIT Kavli Institute; the MIT Program in Art, Culture, and Technology; the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University; the Center for Collaborative Arts and Media at Yale University; the multi-affiliated Szostak Laboratory, and the Harvard-MIT Program in Health Sciences and Technology to fly 22 different projects exploring research as diverse as fermentation, reconfigurable space structures, and the search for life in space. 

Most of these projects resulted from the 2019 or 2020 iterations of MAS.838 / 16.88 (Prototyping our Sci-Fi Space Future), taught by Ariel Ekblaw, SEI founder and director, who began teaching the class in 2018. (Due to the Covid-19 pandemic, the 2020 flight was postponed, leading to two cohorts being flown this year.)

“The course is intentionally titled ‘Prototyping our Sci-Fi Space Future,’” she says, “because this flight opportunity that SEI wrangles, for labs across MIT, is meant to incubate and curate the future artifacts for life in space and robotic exploration — bringing the Media Lab’s uniqueness, magic, and creativity into the process.” 

The class prepares researchers for the realities of parabolic flights, which involve conducting experiments in short, 20-second bursts of zero gravity. As the course continues to offer hands-on research and logistical preparation, and as more of these flights are executed, the projects themselves are demonstrating increasing ambition and maturity.

“Some students are repeat flyers who have matured their experiments, and [other experiments] come from researchers across the MIT campus from a record number of MIT departments, labs, and centers, and some included alumni and other external collaborators,” says Maria T. Zuber, MIT’s vice president for research and SEI faculty advisor. “In short, there was stiff competition to be selected, and some of the experiments are sufficiently far along that they’ll soon be suitable for spaceflight.” 

Dream big, design bold 

Both the 2020 and 2021 flight cohorts included daring new experiments that speak to SEI’s unique focus on research across disciplines. Some look to capitalize on the advantages of microgravity, while others seek to help find ways of living and working without the force that governs every moment of life on Earth. 

Che-Wei Wang, Sands Fish, and Mehak Sarang from SEI collaborated on Zenolith, a free-flying pointing device to orient space travelers in the universe — or, as the research team puts it, a 3D space compass. “We were able to perform some maneuvers in zero gravity and confirm that our control system was functioning quite well, the first step towards having the device point to any spot in the solar system,” says Sarang. “We’ll still have to tweak the design as we work towards our ultimate goal of sending the device to the International Space Station!” 

Then there’s the Gravity Loading Countermeasure Skinsuit project by Rachel Bellisle, a doctoral student in the Harvard-MIT Program in Health Sciences and Technology and a Draper Fellow. The Skinsuit is designed to replicate the effects of Earth gravity for use in exercise on future missions to the moon or to Mars, and to further attenuate microgravity-induced physiological effects in current ISS mission scenarios. The suit has a 10-plus-year history of development at MIT and internationally, with prior parabolic flight experiments. Skinsuit originated in the lab of Dava Newman, who now serves as Media Lab director.

“Designing, flying, and testing an actual prototype is the best way that I know of to prepare our suit designs for actual long-term spaceflight missions,” says Newman. “And flying in microgravity and partial gravity on the ZERO-G plane is a blast!” 

Alongside the Skinsuit are two more projects flown this spring that involve wearables and suit prototypes: the Peristaltic Suit developed by Media Lab researcher Irmandy Wicaksono and the Bio-Digital Wearables or Space Health Enhancement project by Media Lab researcher Pat Pataranutaporn. 

“Wearables have the potential to play a critical role in monitoring, supporting, and sustaining human life in space, lessening the need for human medical expert intervention,” Pataranutaporn says. “Also, having this microgravity experience after our SpaceCHI workshop … gave me so many ideas for thinking about other on-body systems that can augment humans in space — that I don’t think I would get from just reading a research paper.” 

AgriFuge, from Somayajulu Dhulipala and Manwei Chan (graduate students in MIT's departments of Mechanical Engineering and AeroAstro, respectively), offers future astronauts a rotating plant habitat that provides simulated gravity as well as a controllable irrigation system. AgriFuge anticipates a future of long-duration missions where the crew will grow their own plants — to replenish oxygen and food, as well as for the psychological benefits of caring for plants. Two more cooking-related projects that flew this spring include H0TP0T, by Larissa Zhou from Harvard SEAS, and Gravity Proof, by Maggie Coblentz of the SEI — each of which helps demonstrate a growing portfolio of practical "life in space" research being tested on these flights.

The human touch 

In addition to the increasingly ambitious and sophisticated individual projects, an emerging theme in SEI’s microgravity endeavor is a focus on approaches to different aspects of life and culture in space — not only in relation to cooking, but also architecture, music, and art. 

Sanjana Sharma of the SEI flew her Fluid Expressions project this spring, which centers around the design of a memory capsule that functions as both a traveler’s painting kit for space and an embodied, material reminder of home. During the flight, she was able to produce three abstract watercolor paintings. “The most important part of this experience for me,” she says, “was the ability to develop a sense of what zero gravity actually feels like, as well as how the motions associated with painting differ during weightlessness.” 

Ekblaw has been mentoring two new architectural projects as part of the SEI’s portfolio, building on her own TESSERAE work for in-space self-assembly: Self Assembling Space Frames by SEI’s Che-Wei Wang and Reconfigurable space structures by Martin Nisser of MIT CSAIL. Wang envisions his project as a way to build private spaces in zero-gravity environments. “You could think of it like a pop-up tent for space,” he says. “The concept can potentially scale to much larger structures that self-assemble in space, outside space stations.” 

Onward and upward

Two projects that explore different notions of the search for life in space include Ø-scillation, a collaboration between several scientists at the MIT Kavli Institute, Media Lab, EAPS, and Harvard; and the Electronic Life-detection Instrument (ELI) by Chris Carr, former MIT EAPS researcher and current Georgia Tech faculty member, and Daniel Duzdevich, a postdoc at the Szostak Laboratory. 

The ELI project is a continuation of work within Zuber’s lab, and has been flown on previous flights. “Broadly, our goals are to build a low-mass life-detection instrument capable of detecting life as we know it — or as we don’t know it,” says Carr. During the 2021 flight, the researchers tested upgraded hardware that permits automatic real-time sub-nanometer gap control to improve the measurement fidelity of the system — with generally successful results. 

Microgravity Hybrid Extrusion, led by SEI’s mission integrator, Sean Auffinger, alongside Ekblaw, Nisser, Wang, and MIT Undergraduate Research Opportunities Program student Aiden Padilla, was tested on both flights this spring and works toward building in situ, large-scale space structures — it’s also one of the selected projects being flown on an ISS mission in December 2021. The SEI is also planning a prospective “Astronaut Interaction” mission on the ISS in 2022, where artifacts like Zenolith will have the chance to be manipulated by astronauts directly. 

This is a momentous fifth anniversary year for SEI. As these annual flights continue, and the experiments aboard them keep growing more advanced, researchers are setting their sights higher — toward designing and preparing for the future of interplanetary civilization. 

Read More

How enterprise MLOps supports scaling Data Science

Business meeting

For companies investing in data science, the stakes have never been so high. According to a recent survey from New Vantage Partners (NVP), 62 percent of firms have invested over $50 million in big data and AI, with 17 percent investing more than $500 million. Expectations are just as high as investment levels, with a survey from Data IQ revealing that a quarter of companies expect data science to increase revenue by 11 percent or more.

Read More

Big tech tries to derail EU AI policy with ‘warnings’ from US think tank

Vampire tech

EU policymakers recently proposed a sweeping set of regulations called the Artificial Intelligence Act (AIA). If made law, the AIA would offer European citizens the strictest, most comprehensive protections against predatory AI systems on the planet. And big tech is terrified.

Read More

The Uploaded Brain

Uploaded brain

Is it possible to upload a human brain, and what could happen if we did?
This is a question I have been pondering a lot lately. I'm not trying to discredit anyone's beliefs, and I'm not expecting a definitive answer; I simply want to open a discussion about a topic I find interesting and vaguely terrifying. I would like to begin by talking about what it means to "upload a human brain."

Read More

Is the democratization of AI good?

Unisex sign and individual

In the modern age of education, almost anyone with an internet connection can learn anything they want to. This is also true for learning AI, and now, anyone with the requisite background has the opportunity to learn AI and build AI programs. When I say “democratization,” I mean the easy access to AI education and learning, and more importantly, the easy access to building scalable AI applications. In an article I wrote earlier this summer, I discussed my personal experience with AI ethics and how I paid little regard to the implications of my work.

Read More

New $35M AI Research Center at Indiana University created to grow AI Education

Luddy Center for Artificial Intelligence

The Luddy Center for Artificial Intelligence, which was unveiled June 23 and will open in August for the start of the fall semester, includes 58,000 square feet of space designed to enable multidisciplinary research in the constantly expanding AI field.

Read More

“Above the Trend Line” – Your Industry Rumor Central for 7/13/2021

Above the trendline

In this column, we present a variety of short, time-critical news items grouped by category, such as M&A activity, people movements, funding news, industry partnerships, customer wins, rumors, and general scuttlebutt floating around the big data, data science, and machine learning industries, including behind-the-scenes anecdotes and curious buzz.

Read More

Solve your MLOps problems with an Open Source Data Science stack

Brick stack

Data scientists have challenges and need tools to overcome them. It’s best to use open-source, best-of-breed, modular solutions. It’s also a good idea to think about these challenges from a problem-solution perspective, as opposed to a “give me the awesomest tools” approach. I provide a list of common problems and OSS solutions for those problems, with comments on when they’re better/worse for that issue.

Read More

Including ModelOps in your AI strategy

ModelOps enterprise capability

Modern enterprises recognize that adopting a data-driven strategy is crucial to compete in an increasingly digitalized market. Data and analytics have become a board-level priority, with boards viewing technologies such as Machine Learning and Artificial Intelligence as an opportunity to expand business capabilities, make processes more efficient, and enable new business models.

Read More

How can businesses turn the tide on the AI diversity crisis?

Diversity graphic

For quite some time now, pressure has been mounting on tech companies and big conglomerates to get the AI industry's diversity crisis under control. From home assistants that can remind us to do chores and look up information on demand, to customer service chatbots that take care of queries and complaints, we are increasingly relying on technologies that use AI to assist in our daily lives. In the months and years to come, the reach of these technologies is likely to extend even further, and as such the conversation around their ethics has recently come to something of a crescendo.

Read More

How to solve Reproducibility in ML

Chances are that you’ve come across a machine learning paper and tried to replicate it, only to find that you get very different results. You tweak your code but keep getting it wrong. At this point, you’re doubting your skills as a data scientist, but don’t worry. I’m here to tell you that it’s completely okay, and it is not your fault! This is the classic case of the reproducibility challenge, a problem that isn’t unique to machine learning.

Read More

Follow the Money June 2021: 50 Funded Machine Learning Companies

Follow the Money June 2021: 50 Funded Machine Learning Companies

The latest June 2021 funding news covering artificial intelligence, machine learning, robotics, and innovation.

Read More

Top AI investment opportunities in 2021

Investment and growth

We are living in a digital era where artificial intelligence is reshaping our lives. AI is creeping into virtually every industry, from marketing to finance. It is boosting productivity, sales, customer service, product innovation, and operating efficiency. This evolution of AI in recent years has rarely been far from the headlines, since it is impacting almost every sector and geography and carries economic ramifications on the scale of a new industrial revolution. According to the investment bank Oppenheimer, the next giant technology frontier is AI.

Read More

8 UK founders, leaders highlight fintech and deep tech as Bristol’s top sectors

Bristol startup ecosystem

The U.K. is gaining in popularity as a great place to start a tech firm. The country is quickly catching up to China on the tech investment front, with VC investments reaching a record of $15 billion in 2020, according to TechNation. Bristol proved especially popular among tech investors last year — local businesses raked in an impressive $414 million in 2020, making it the third-largest U.K. city for tech investment. The city also has the most fintech startups per head in the U.K. outside London, according to Whitecap’s 2019-2020 Ecosystem Report.

Read More

Proof of Concept to Production

Proof of Concept to Production

Proof of Concept (POC) is basically an experiment. It takes the form of a project, system, program, or product that isn't 100% finished, but is ready enough to try on a real-world case.

In simple terms, a Proof of Concept is a demonstration that your idea or theory can make it into the real world. It also shows whether the service or product is cost-effective and worth the money and resources needed to develop it. In most cases, a POC is used for research purposes and to convince investors that a product or service has enough potential to be profitable. POCs are an important part of development because they find gaps in the workflow and ways to tackle them. A Proof of Concept is like a demo version of the project: it shows how a system can be implemented, or what throughput can be achieved, under a given set of parameters.

In this article, we'll talk about:

- What a Proof of Concept is
- Why a Proof of Concept is important
- The methodology for a POC
- Evaluating the POC
- Challenges faced going from POC to production
- Things to consider after the POC

What is Proof of Concept?

Chances are that your project has unique requirements, and it might be unclear if it’s even possible to turn it into reality. Proof of concept is a strategic way of testing those unique requirements to make sure that you’re not wasting your budget on an impossible product. 

So, a POC isn't a production-ready system; it's more of a process to test whether your system will actually work in production at all, and whether you should keep investing in it.

POCs in AI and machine learning are built and tested with simple algorithms and small volumes of data to see if it makes sense to develop them further (a minimal sketch of what this can look like appears at the end of this section). The journey from POC to production is not that straightforward. If the POC is successful, the project moves on to production. Apart from telling you whether a project is worth pursuing, POCs also help you answer other questions, like:

- Is your workflow properly set up?
- What problems will you face in development?
- Does the functionality satisfy project requirements?

If the project idea is simple, most people will just assume that it’s feasible and skip the POC. This approach usually works out, but sometimes it might turn into a costly mistake. 
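To make this concrete, here is a minimal sketch of the kind of small-scale POC described above: a simple baseline model trained on a modest data sample, with one success criterion checked before anyone invests further. The file path, column names, and the 0.75 threshold are placeholders, not part of any real project.

```python
# Minimal ML POC sketch: a simple baseline on a small data sample.
# "data/poc_sample.csv", the column names, and the 0.75 threshold are
# placeholders for illustration only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("data/poc_sample.csv")          # small, representative sample
X, y = df.drop(columns=["target"]), df["target"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

# The POC "go / no-go" question: does a simple approach clear the bar?
print(f"Baseline accuracy: {accuracy:.3f}")
print("Worth developing further" if accuracy >= 0.75 else "Revisit the idea")
```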

Read also
7 Tools to Build Proof-of-Concept Pipelines for Machine Learning Applications

Why is a Proof of Concept important?

POCs illustrate the capabilities of your product or service and inform how you plan the project. Mistakes in AI and ML are costly, and a Proof of Concept is a good way to save money, showcase your plans to project stakeholders, and show whether your product can be trusted.

But that’s not all. Here are a few more reasons why POCs in AI are a good practice:

Minimal risk

POCs help you assess risk at an early stage. Without investing lots of money and effort, you can test whether your project is worth developing and see exactly how risky it is. Say you are planning to move your application to a new platform and you aren't sure whether it will work. A POC will help you determine whether it works and whether you can move forward with it.

While working on a POC, you will run into many unexpected problems. With those problems identified, you can work out solutions before going to the production phase. POCs also generate a lot of insights, whether about the predictive value of your product's data or about some particular problem you're facing, and these insights remain helpful during and after the POC phase.

Improving workflow 

Test, enhance, and improve. Information gained during the POC stage helps companies in the long term and creates opportunities to improve the workflow or the model structure even after deploying to production.

Save time and resources 

Consider a scenario where the POC surfaces issues at an early stage. Because the work happens before production, there is time to go back, fix the issues, and run multiple tests before showcasing anything. A proof of concept can change how production goes: if your POC shows good potential, you can tune and improve it during deployment, and you can show your stakeholders and investors that your product has solid potential to be profitable.


Methodology for POC

There are a few broad steps that apply to any POC, regardless of the domain or type of software. Depending on your product requirements, you’ll need a good amount of time and resources to deploy to production. 

You’ll have to focus on the data modeling part to make models more accurate. When you get close to the production phase, things will get more complicated from a data perspective. 

Prove the requirements 

We need to define our requirements, or our clients' needs, and determine the best way to satisfy them. Once you're ready with the concept, you can map each point to a solution. You can also collect user and customer feedback on the concept and ask what could be improved from their point of view. In particular, you need to finish this stage with:

- Problem Definition: Define the requirements and analyze them. Sometimes POCs fail because they lack a clear problem definition; a good one fills the gap between the current state and the desired state of the product.
- Data Collection & Preparation: Once everything is defined, it's time to start preparing the data. Explore and experiment with datasets, choose decent ones, and see if they're missing any crucial data points. Prepare the data by sorting, structuring, processing, and adding missing data points, and monitor how the data generates the output. Once the data preparation stage is done, it's time to develop and test (a small data-preparation sketch follows this list).
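For the data collection and preparation step, a lightweight check like the one below can surface missing values, duplicates, and type problems before any modelling starts. The file names and column names are hypothetical.

```python
# Quick data-preparation checks before modelling.
# File names and column names are hypothetical.
import pandas as pd

df = pd.read_csv("data/raw_records.csv")

# 1. How much is missing, per column?
print(df.isna().mean().sort_values(ascending=False))

# 2. Fix obvious structural issues: types, duplicates, ordering.
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df = df.drop_duplicates().sort_values("signup_date")

# 3. Fill or drop the remaining gaps explicitly, so the choice is documented.
df["age"] = df["age"].fillna(df["age"].median())
df = df.dropna(subset=["target"])

df.to_csv("data/prepared_records.csv", index=False)
```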

Once you've done all the research for your product, it's time to prototype and test it. Create the UI/UX for key features and develop a prototype product. Test it internally, or even with a select group of outside users. There are three key elements of prototyping:

- Modelling: Add custom or pre-defined machine learning algorithms. You can run different machine learning experiments and create an ML model by training it on a dataset with your chosen algorithm.
- Collaboration: Efficient information exchange between teams makes the work much easier.
- Testing: Once training is complete, test your model on different data, including data it has never seen before. This lets data scientists monitor how well the model worked, what needs improvement, and what went wrong. Through testing, you check the logical steps your algorithm has learned and see whether they match the product solution (see the sketch after this list).
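One way to read the modelling and testing items above in code is to train a candidate model and compare it against a trivial baseline on folds it has never seen. The dataset, models, and scoring metric below are illustrative choices, not a prescription.

```python
# Prototype-stage modelling and testing: compare a candidate model
# against a trivial baseline on held-out folds it has never seen.
# The choice of dataset, models, and scoring metric is illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)   # stand-in for your POC dataset

baseline = DummyClassifier(strategy="most_frequent")
candidate = RandomForestClassifier(n_estimators=100, random_state=0)

baseline_score = cross_val_score(baseline, X, y, cv=5, scoring="f1").mean()
candidate_score = cross_val_score(candidate, X, y, cv=5, scoring="f1").mean()

print(f"Baseline F1:  {baseline_score:.3f}")
print(f"Candidate F1: {candidate_score:.3f}")
# If the candidate barely beats the baseline, the prototype needs rethinking.
```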

Delivery 

Once your POC is up and running, you can create an MVP (Minimum Viable Product) and showcase it to a larger segment of your user base. By now you've collected a ton of information about the product and its inner workings: feedback, test results, prototypes, and more. Use this information to design a roadmap for deploying your product or service to the real world.

Validation: This is the final stage of the POC, where all the information, results, and issues are presented to teams and stakeholders. Together you discuss the deployment roadmap, data collection, the monitoring roadmap, and more.

Evaluating the POC

Once you’re done with the Proof of Concept, you need to evaluate the outcome of your product or service against the original goals and assumptions. If your POC met your earlier assumptions or even exceeded them, it means you’re on the right track.

Through the POC process, you might learn a few things about how to improve the product, as well as realize exactly what could go wrong in production. You might find out that some features are out of scope and need lots of improvement. 

If the evaluation is positive, it's time to scale the POC. To take your ML models to production, you'll now have to work on some non-critical features as well, and you'll have to monitor model performance across a larger customer segment and larger infrastructure.
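Evaluation is easier when the original goals are written down as explicit thresholds before the POC starts. The sketch below uses hypothetical target numbers and POC results; the point is the comparison, not the specific values.

```python
# Evaluate POC results against the goals agreed before the work started.
# Both dictionaries hold hypothetical, illustrative numbers.
targets = {"accuracy": 0.80, "latency_ms": 200, "cost_per_1k_predictions": 0.50}
poc_results = {"accuracy": 0.84, "latency_ms": 310, "cost_per_1k_predictions": 0.42}

for metric, target in targets.items():
    achieved = poc_results[metric]
    # Higher is better for accuracy; lower is better for latency and cost.
    ok = achieved >= target if metric == "accuracy" else achieved <= target
    status = "met" if ok else "NOT met"
    print(f"{metric}: target {target}, achieved {achieved} -> {status}")
```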

Challenges faced in POC to production 

Things don't always go the way you expect. You will face many problems during and after the move from POC to production. Common issues include:

- Data issues
- Improper management
- Inadequate ML tools and frameworks
- Not enough expertise on the team

Organizations should have a proper architecture in place to support AI/ML integration. They have to conduct large-scale research, build cross-functional teams, and test products against different hardware and software parameters.

Getting machine learning or AI into production takes a lot of patience, effort, and resources. It takes a good amount of research, a skilled team, hardware and software resources, and consultation from experts. Because of these challenges, many promising ideas get dumped at this stage. Below are the top challenges companies face during the POC-to-production period.


Management problems 

You might have the best model, but if your organization doesn't understand its potential, it won't make it to production. Sometimes organizations don't provide access to the whole machine learning environment, and if something goes wrong with a model or feature, there may be no one to take accountability. Machine learning and AI-based products and services are expensive, so organizations sometimes cut staff and resources just to save money; companies often take a step back as soon as they see the cost of running the POC. But reduced staff and software budgets cause delays and major issues at the production level. Due to improper management, legal issues, and various other organizational problems, models fail to survive and are discarded.

Technical problems 

Models often fail to make it to production because organizations don't have enough knowledge of the right tools and best practices. Organizations also cut hardware and software resources to save money, which leads to many problems and eventually hurts the POC. They can over-complicate things and fail to meet the goals they originally set, causing delays and making it harder for the model to survive.

Data problems 

There are many problems related to data: collection, quality, and volume. Data collection is where the team spends most of its time. Collecting the right data in the required format is challenging, because data arrives in many different formats and files. If the data isn't structured and cleaned properly, the model can run into many issues.

The data used in the proof of concept might have gaps and issues in its training datasets. You need well-formatted data for production, so review the metrics and fill the gaps in your datasets.

When moving from POC to production, the POC's data volume often isn't enough to detect changes and keep training the models; you need a good amount of data to improve your model's predictions. Let's say your model detects a particular fruit, but the dataset used to train it was collected in winter. The accuracy figures might look fine, yet the model won't be able to detect unripe fruit, because the data isn't broad enough to give accurate results. It's best to use real-world data at production scale to get better results.

When you move from POC to production, the data and its conditions might change, and the model starts to lose its predictive power. This often happens because the data collected during the POC and the data seen in production were gathered with different collection methods. You might have to revisit a few steps before going to production.
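One simple way to catch this kind of change is to compare the distribution of each key feature in the POC-era training data against recent production data, for example with a two-sample Kolmogorov–Smirnov test. The file paths, column names, and the 0.05 threshold below are assumptions for illustration.

```python
# A simple drift check: compare the distribution of key features in the
# POC training data against recent production data.
# File paths, column names, and the 0.05 threshold are assumptions.
import pandas as pd
from scipy.stats import ks_2samp

train = pd.read_csv("data/poc_training_data.csv")
recent = pd.read_csv("data/production_last_week.csv")

for column in ["price", "quantity", "customer_age"]:
    statistic, p_value = ks_2samp(train[column].dropna(), recent[column].dropna())
    if p_value < 0.05:
        print(f"Possible drift in '{column}' (KS={statistic:.3f}, p={p_value:.4f})")
    else:
        print(f"'{column}' looks stable (p={p_value:.4f})")
```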

Environment problems 

To properly manage and operate your models, you need good infrastructure; it makes every process easier to monitor and handle. You need a proper environment in which to process and serve data. Having a data-ready model helps you cut cost and complexity in the production phase. To run models in production, you need a proper management and monitoring system: you have to test, version, and monitor your models.
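"Test, version, and monitor" can start very simply: save every trained model together with a small metadata record, so any result seen in production can be traced back to a specific artifact. The directory layout, the toy training data, and the metadata fields below are illustrative choices, not a prescribed setup.

```python
# Minimal model versioning: store each artifact with traceable metadata.
# Directory layout, toy dataset, and metadata fields are illustrative.
import json, time, pathlib
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)          # stand-in for real training data
model = LogisticRegression(max_iter=1000).fit(X, y)

version = time.strftime("%Y%m%d-%H%M%S")
model_dir = pathlib.Path("models") / version
model_dir.mkdir(parents=True, exist_ok=True)

joblib.dump(model, model_dir / "model.joblib")
(model_dir / "metadata.json").write_text(json.dumps({
    "version": version,
    "algorithm": "LogisticRegression",
    "training_rows": len(X),
    "train_accuracy": float(model.score(X, y)),
}, indent=2))
print(f"Saved model version {version}")
```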

Check also
👉 Model Registry – How to Have Your Model Development Under Control
👉 Developing AI/ML Projects for Business – Best Practices


Things to consider after POC

Your product or service might be up and running, but that's not the end of your work. You'll have to keep working on the production architecture: filling gaps in datasets, monitoring the workflow, and updating the system regularly. Even once your experiments reach the production stage, there are a few things to consider.

Program reusability

Working with a repeatable, reusable program for your data preparation and training phases makes things robust and easy to scale. Notebooks are generally hard to manage, so moving the logic into plain Python files is often a better option and improves the quality of the work. The key to a repeatable pipeline is to treat your machine learning environment as code; that way, the entire end-to-end pipeline can be executed whenever a significant event occurs.
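Here is a hedged sketch of what "pipeline as code" can look like: the preparation and training steps live in plain Python functions that a script, a scheduler, or CI can call whenever a significant event occurs, instead of being trapped in notebook cells. The file paths and model choice are placeholders.

```python
# pipeline.py - a repeatable prepare->train pipeline as plain Python,
# callable from CI, a scheduler, or the command line.
# File paths and the model choice are placeholders.
import pandas as pd
import joblib
from sklearn.ensemble import GradientBoostingClassifier


def prepare(raw_path: str) -> pd.DataFrame:
    """Load and clean the raw data in one reproducible step."""
    df = pd.read_csv(raw_path)
    return df.dropna().drop_duplicates()


def train(df: pd.DataFrame, model_path: str) -> None:
    """Train the model and persist it as an artifact."""
    X, y = df.drop(columns=["target"]), df["target"]
    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    joblib.dump(model, model_path)


if __name__ == "__main__":
    # Re-running the whole pipeline is one command: python pipeline.py
    data = prepare("data/raw.csv")
    train(data, "models/latest.joblib")
```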

Data storage and monitoring

To keep your ML experiments running continuously, you have to stay focused on the data as well as the environment. If the input data changes, you may start to see issues with model accuracy. Monitoring the data and making sure it arrives in the correct format is therefore important, because sometimes the incoming data, or the labels connected to it, will have changed.
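A lightweight schema check is one hedged way to catch format and label changes before they silently degrade accuracy. The expected columns, dtypes, and value ranges below are hypothetical examples.

```python
# Validate incoming data against the schema the model was trained on.
# Expected columns, dtypes, and value ranges are hypothetical examples.
import pandas as pd

EXPECTED_SCHEMA = {
    "customer_age": ("int64", 18, 100),
    "price": ("float64", 0.0, 10_000.0),
}

def validate(df: pd.DataFrame) -> list[str]:
    problems = []
    for column, (dtype, low, high) in EXPECTED_SCHEMA.items():
        if column not in df.columns:
            problems.append(f"missing column: {column}")
            continue
        if str(df[column].dtype) != dtype:
            problems.append(f"{column}: expected {dtype}, got {df[column].dtype}")
        if not df[column].dropna().between(low, high).all():
            problems.append(f"{column}: values outside [{low}, {high}]")
    return problems

issues = validate(pd.read_csv("data/incoming_batch.csv"))
print("OK" if not issues else "\n".join(issues))
```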

Governance

As you go from POC to production, the number of developers and data scientists working on the product or service needs to increase, which helps distribute the work and speeds up delivery. Without proper governance in production, many issues can arise. You'll have to create a hub where every member of the team is connected and has access to what they need; this keeps things running smoothly and makes the system easier to maintain. Managing and organizing your machine learning experiments is a task in itself.


Conclusion 

A proof of concept is an important part of taking your idea into the real world. The process helps you evaluate your product requirements and the difficulties you might face in production, and it keeps you focused on modeling your idea.

The POC gives you detailed insights about your product or service and ways to improve it. It helps you find out whether a particular feature holds up in real-world deployment, and it lets you find and solve problems before you ever reach production. It's worth it!


Harshil Patel

Android Developer and Machine Learning enthusiast. I have a passion for developing mobile applications, making innovative products, and helping users.


READ NEXT

MLOps: What It Is, Why it Matters, and How To Implement It (from a Data Scientist Perspective)

13 mins read | Prince Canuma | Posted January 14, 2021

According to techjury, we produced 10x more data in 2020 than in 2019. For data scientists like you and me, that is like an early Christmas: there are so many theories and ideas to explore and experiment with, so many discoveries to be made, and so many models to be developed.

But if we want to be serious and actually have those models touch real-life business problems and real people, we have to deal with the essentials like:

- acquiring & cleaning large amounts of data;
- setting up tracking and versioning for experiments and model training runs;
- setting up the deployment and monitoring pipelines for the models that do get to production.

And we need to find a way to scale our ML operations to the needs of the business and/or users of our ML models.

There were similar issues in the past when we needed to scale conventional software systems so that more people could use them. DevOps' solution was a set of practices for developing, testing, deploying, and operating large-scale software systems. With DevOps, development cycles became shorter, deployment velocity increased, and system releases became auditable and dependable.

That brings us to MLOps. It was born at the intersection of DevOps, Data Engineering, and Machine Learning, and it’s a similar concept to DevOps, but the execution is different. ML systems are experimental in nature and have more components that are significantly more complex to build and operate.

Let’s dig in!

Continue reading →

Read More