Kenshoo, a provider of a platform for managing marketing campaigns, yesterday announced its intent to acquire Signals Analytics, which provides a service for collecting unstructured consumer data that allows marketers to model campaigns using AI technologies.
Synced has selected 10 AI-related podcasts for readers to check out over the holiday season.
Dubber Corporation Ltd., a provider of unified call recording and analytics software based in Australia, revealed that it’s acquiring Speik, a U.K.-based maker of call recording software that complies with Payment Card Industry (PCI) security mandates. Valued at $38 million, the deal is part of Dubber’s effort to create a widely available cloud-based voice recording service infused with machine learning algorithms.
Face recognition, or facial recognition, is one of the largest areas of research within computer vision. We can now use face recognition to unlock our mobile phones, verify identification at security gates, and in some countries, make purchases. With the ability to make numerous processes more efficient, many companies invest into the research and development of facial recognition technology. This article will highlight some of that research and introduce five machine learning papers on face recognition.
AI chipmaker Horizon Robotics is seeking to raise $700 million in a new funding round. Horizon is often seen as potentially becoming China’s equivalent of NVIDIA. The company is founded by Dr Kai Yu, a prominent industry figure with quite the credentials. Yu led Baidu’s AI Research lab for three years, founded the Baidu Institute…
Enterprise datacenters were traditionally manually operated and maintained, demanding both constantly available personnel and persistent vigilance for potential maintenance needs. But AI advancements are changing that…
These are interesting times indeed! Almost every technology-driven company is rushing into Artificial Intelligence (AI) and Machine Learning (ML), and even laypeople are tremendously curious about AI technology.
AI could disrupt almost everything we do in our daily lives; it is that exceptional. Integrating AI and ML into the business ecosystem opens up numerous opportunities to transform the value chain.
Accenture states that AI can increase business productivity by up to a staggering 40%.
Then why do businesses find it so difficult to market their AI and ML solutions? Plenty of people are enthusiastic about adopting AI, but many more businesses are still biting their nails or scratching their heads over it. Don’t believe it? Numbers don’t lie.
According to a Deloitte report, 94% of enterprises face problems while implementing Artificial Intelligence.
AI technology certainly sounds business-friendly, hands down. On the flip side, it is just as important for technology companies to understand the problems they will face while marketing AI-driven products and solutions.
This article will help readers identify and understand the challenges AI development companies face in marketing AI and ML products, and what it takes to overcome them.
7 Challenges to Market AI & ML Solutions
1. If AI is a Ferrari, Data is its Fuel
Machine Learning, a subset of Artificial Intelligence, requires not just data but labeled data: data that pairs each input (the parameters) with a known answer, so that a model can learn to predict future outputs. Without such data there are no prediction parameters, and business decision making cannot become data-centric. The BIG question is: how many companies actually have a wide variety of labeled data?
Thanks to Big Data, many companies have collected plenty of data, but how much of it is labeled? We are not talking about quantitative data whose output can be derived from the data itself. Consider an image of a dog: nothing intrinsic to a group of pixels tells an algorithm that those pixels represent a dog. A human first has to label the image as a dog so the algorithm can be trained on it.
Having enormous amounts of data does not guarantee the desired output or insights, but data is a huge factor in developing a functional, feasible, and reliable AI and Machine Learning system. The real problem is that across industries, some businesses do not have sufficient data to start with, while others sit on decades of unusable data.
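To make the distinction concrete, here is a minimal sketch (pure Python, with made-up measurements) of what labeled data means in practice: every training example pairs a feature vector with a human-assigned label, and even a trivial nearest-neighbour classifier can only predict because those labels exist.

```python
import math

# A toy labeled dataset: each example pairs a feature vector with a
# human-assigned label. Without the label column, the numbers alone
# carry no meaning an algorithm could learn "dog" or "cat" from.
labeled_data = [
    ((30.0, 20.0), "dog"),   # (weight_kg, height_cm) - illustrative values
    ((25.0, 18.0), "dog"),
    ((4.0, 9.0), "cat"),
    ((5.0, 10.0), "cat"),
]

def classify(features):
    """Predict a label with 1-nearest-neighbour over the labeled examples."""
    return min(
        labeled_data,
        key=lambda example: math.dist(features, example[0]),
    )[1]

print(classify((28.0, 19.0)))  # close to the "dog" examples -> "dog"
print(classify((4.5, 9.5)))    # close to the "cat" examples -> "cat"
```

Strip the second element from every tuple and the same algorithm has nothing to return: the supervision lives entirely in the labels.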
2. A Versatile Solution is a Tough Ask
A technology product has to be versatile. By that we mean a generic AI product, once launched to the market, should be able to solve or upgrade business operations across multiple industries.
This is practically impossible to achieve. Why? Because Machine Learning algorithms are trained by humans on specific labeled data to produce precise predictions.
To make it crystal clear, let us briefly discuss an example. Think of an AI & ML based Healthcare solution which uses large datasets of individual patients. An algorithm is trained with precise and detailed labeled data to deliver the precision parameters to make predictions. This can derive advanced output on health risk stratification, disease progression, the probable effects of various interventions on specific patients, and so on.
Now ask whether this whole AI-based healthcare solution could be made compatible with the automotive industry, even with some tweaks and tricks. It’s a BIG no. The algorithms would need to be trained all over again, and hardware gets involved too if a business wants data on driver behavior, predictive maintenance, engine health, and so on.
This is where the cost factor comes in: a single AI product rarely works across multiple industries. Generalized AI, which could perform many different tasks just as a human does, would solve this versatility problem, but it is still far too early to count on it.
3. AI Adoption is a ‘Costly Affair’
Adoption of AI in business is certainly a costly affair. Startups and small businesses struggle to welcome this technology because, unlike the tech giants, they do not have separate funds available to implement AI in their operations.
Let us see a couple of facts which portray how AI & ML in business could become a costly affair.
Budget Vs Precision Trade-Off
The business transformation that AI and ML integration delivers depends heavily on the budget allocated to the technology. The bigger the budget, the more types of data can be aggregated using high-end technologies and sensors that deliver real-time data. As a result, precision parameters can be derived for every single event, enabling more accurate decision making.
Expensive AI & ML Resources
Implementing AI solutions in existing business systems requires experienced, and expensive, data scientists, data engineers, and industry-specific subject matter experts. Businesses on a tight budget find it immensely difficult to bring in such resources. Non-tech businesses also often mistake the initial project cost for the full cost of a finished, implemented solution.
Recurring Cost of Data Storage Solutions
Data storage is a critical factor in the overall cost of AI adoption. AI works best when the sensors can feed it enough input data: the more input data, the more accurate the prediction parameters. But the mountain of quality data that needs to be stored and processed drives up data storage costs and overall IT infrastructure costs. That can raise a few eyebrows when it comes to maintaining an economically sustainable business model.
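The budget-versus-precision trade-off can be seen in a toy simulation (all numbers are illustrative, not from any real deployment): averaging more noisy sensor readings yields a more precise estimate, which is exactly what a bigger data budget buys.

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0   # the quantity the sensors are measuring (illustrative)
NOISE_STD = 15.0     # sensor noise level (illustrative)

def estimate(n_readings):
    """Estimate TRUE_VALUE by averaging n noisy sensor readings."""
    readings = [random.gauss(TRUE_VALUE, NOISE_STD) for _ in range(n_readings)]
    return statistics.fmean(readings)

def avg_error(n_readings, trials=200):
    """Average absolute estimation error over repeated trials."""
    return statistics.fmean(
        abs(estimate(n_readings) - TRUE_VALUE) for _ in range(trials)
    )

# More readings per event -> smaller estimation error (roughly 1/sqrt(n)).
print(f"avg error with 10 readings per event:   {avg_error(10):.2f}")
print(f"avg error with 1000 readings per event: {avg_error(1000):.2f}")
```

Collecting and storing a hundred times more readings buys roughly a tenfold reduction in error, which is precisely where the storage and infrastructure bill comes from.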
4. Supply and Demand Issue of Computational Needs
Expensive GPUs (Graphics Processing Units) are used alongside CPUs to accelerate large-scale data processing. Even big companies cannot always get them, because GPU supply is limited.
Then why not CPUs? Because training a machine learning or deep learning model on CPUs takes much longer than on GPUs, often days or even weeks. Data processing can turn into a race against time; it is nothing like a conventional software development process where you run a program in a matter of minutes.
And suppose the model is finally ready when the sensors acquire a new set of data you want the AI system to incorporate. What happens then? Retraining may again take days or a week. In short, the model cannot stay up to date, because data is acquired much faster than the model can be retrained.
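That staleness problem can be put in back-of-envelope numbers (all figures below are assumptions for illustration, not benchmarks):

```python
# Back-of-envelope model-staleness check. All numbers are illustrative
# assumptions, not measured benchmarks.
DATA_BATCH_EVERY_DAYS = 1.0   # assume a new batch of sensor data lands daily
RETRAIN_TIME_DAYS = 5.0       # assume one full CPU retraining run takes 5 days
GPU_RETRAIN_TIME_DAYS = 0.5   # assume a GPU cuts that to half a day

# While one CPU retraining run is in progress, this many new data batches
# arrive that the freshly trained model has never seen:
batches_missed_per_retrain = RETRAIN_TIME_DAYS / DATA_BATCH_EVERY_DAYS
print(batches_missed_per_retrain)  # 5.0

# With GPU-speed training the model lags by far less data:
gpu_batches_missed = GPU_RETRAIN_TIME_DAYS / DATA_BATCH_EVERY_DAYS
print(gpu_batches_missed)  # 0.5
```

Whenever retraining time exceeds the data-arrival interval, every deployed model is stale by construction; the only levers are faster hardware or less frequent retraining.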
5. Lack of Trust
“AI is the technology for the rich!” The majority of businesses believe this, since only those with dedicated funds to implement it actually own it. One big reason businesses remain reluctant is the lack of convincing AI use cases in the market. Because of this, small and medium-sized businesses, and even non-tech enterprises, show little enthusiasm for it. It is a sheer case of trust issues and lack of support.
Disastrous Example of AI-powered Autonomous Car
Bad-quality data can lead to terrible real-world accidents that leave a lasting dent in the credibility of the technology. That is what happened on the night of March 18, 2018, when an Uber self-driving car, with a safety driver behind the wheel and running in the much-hyped autonomous mode, hit and killed a woman. The shock came later, when telemetry reports suggested the trained algorithm had classified the woman first as an ‘unknown object’, then as a ‘vehicle’, and finally as a ‘bicycle’. This confusion delayed the decision to activate the braking system, and the tragedy could not be prevented.
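A deliberately simplified, hypothetical sketch, not a model of Uber’s actual system, shows how unstable classifications can delay a safety action: if a system waits for a stable label before braking, flip-flopping labels push the decision later. The frame labels and the stability rule below are assumptions for illustration only.

```python
# Hypothetical per-frame classifier outputs for an approaching object.
# These labels and the stability rule are illustrative assumptions.
frame_labels = ["unknown object", "unknown object", "vehicle",
                "bicycle", "bicycle", "bicycle"]

STABLE_FRAMES = 3  # brake only after the same label for 3 consecutive frames

def frames_until_brake(labels):
    """Return the number of frames elapsed before the braking decision."""
    streak = 0
    for i, label in enumerate(labels):
        streak = streak + 1 if i > 0 and label == labels[i - 1] else 1
        if streak >= STABLE_FRAMES:
            return i + 1
    return None  # never reached a stable classification

print(frames_until_brake(frame_labels))        # 6: flip-flops cost 3 frames
print(frames_until_brake(["bicycle"] * 6))     # 3: stable labels brake sooner
```

Under this toy rule, every misclassification resets the clock, so noisy training data translates directly into lost reaction time.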
In the midst of the COVID-19 crisis and beyond, businesses will be hesitant to invest in a technology like AI, not only because they have their doubts, but also because they do not know how a machine makes automated decisions or whether those decisions will be accurate. AI development companies must think about pitching a provable solution. Perhaps Explainable AI can do that job.
6. Legal Challenges: The Blame Game
Bad-quality (noisy) data in AI solution development can produce disastrous output, as the autonomous car example above showed. That, in turn, can cause serious legal problems for the company.
Example of Banking & Finance Industry
Imagine an algorithm trained for banks and financial institutions, where customers’ personal identification and financial details are fed into the algorithms as data. Now imagine that data slipping into the hands of hackers. The company would obviously fall into a web of legal actions.
Who should be blamed for all of this: the company, the hackers, or the software programmers who developed a dodgy AI system? The debate can go on and on, which is why the community has been working on frameworks for assigning responsibility.
What is needed to see off these Challenges?
One way to overcome these challenges, at least to an extent, is to give businesses proof points showing that AI and ML technologies are real and can do what we imagine them doing. It is widely believed that AI can change lives. The problem is that a machine does not understand what humans want or what is acceptable to us; it does what it is trained to do. Reaching a mutual understanding between man and machine would be ideal. If that does not happen, things could easily turn into a nightmare.
Finally, let us have a look at the recommendations on what possibly could be done to overcome the pitfalls we have seen so far.
Explainable AI to Educate Businesses
It is now well understood that businesses need to know how Artificial Intelligence arrives at its outputs; this instills a sense of transparency and trust between man and machine. Still, many believe AI and ML models are too opaque to explain the decisions and forecasts they draw from the data. The answer to this is the concept of Explainable AI.
Explainable AI is a recent branch of AI technology built to explain how AI systems make their decisions. It can provide deep insight into the data, variables, and prediction parameters that lead to a conclusion, an action item, or a decision.
As a result, it becomes possible to define exactly what kind of data is needed. And when the model itself can explain the reasons for a decision, your team of data scientists can evaluate alternative models for the desired results, ensuring that ethics and AI go hand in hand.
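As a minimal illustration of the idea (the weights, bias, and patient values below are made up), a linear risk model is the simplest “explainable” model: its prediction decomposes exactly into per-feature contributions, so the output explains itself.

```python
# A minimal explainability sketch. Weights, bias, and patient values
# are illustrative assumptions, not a real clinical model.
weights = {"age": 0.30, "blood_pressure": 0.50, "bmi": 0.20}
bias = -10.0

patient = {"age": 60, "blood_pressure": 140, "bmi": 31}

# For a linear model, each feature's contribution to the score is simply
# weight * value, so the prediction can be broken down feature by feature.
contributions = {f: weights[f] * patient[f] for f in weights}
risk_score = bias + sum(contributions.values())

for feature, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"{feature:>15}: {value:+.1f}")
print(f"{'risk score':>15}: {risk_score:+.1f}")
```

Real Explainable AI techniques target far more complex models, but the goal is the same: an attribution per input, ranked, that a stakeholder can audit.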
Transparency through Education
The overall business ecosystem needs to accept the technology, and that takes institutional awareness. It is the responsibility of the AI community and AI solution providers to educate businesses about this life-transforming technology in full transparency, so that trust can develop.
Bringing the next wave of AI- and ML-driven innovation to the ground level is only possible once this awareness takes hold in the market.
Without understanding what AI solutions make possible, how they work, and what kind of data their underlying models need, businesses of any size will surely struggle.
AI development companies must identify the ways of educating their existing clients or new target market. One of these ways is expert counselling.
Encourage Businesses to Revise Business Models
AI application providers and marketers must be able to convince vendors to look at the bigger picture. They need to understand the nature of the industry and the business: its operations, objectives, and decision making. It is the AI solution developers’ responsibility to produce detailed documentation and presentations covering the points briefly listed below:
Best solutions which could be proposed using AI & ML.
Type of data needed to run the suitable AI models and ways of getting it.
The process of creating a set of prediction parameters for a desired output.
Identifying and matching criteria, whether the goal is cutting overall costs, opening new revenue streams, or something else.
Identifying AI resources to build a team to develop and implement the AI solutions.
A detailed cost estimation with a strategic roadmap.
Develop AI Strategy and Roadmap
Any goal-oriented organisation looks to grow its revenue, which is why AI solution providers must understand both the challenges and the solution they are going to pitch.
Being technical with AI projects is simply not enough; you need to ask the right questions to obtain a specific, measurable, and achievable set of data. Be transparent about what is possible and what is not. You can then come up with a strategic roadmap that helps businesses understand the long-term benefits and gain competitive advantage through concrete, feasible quick wins in the market.
Motivate MVPs / PoCs for the Start
Once personalised documentation and presentations, including demos and results, have been shown to vendors, AI developers can encourage them to start with a minimum viable product (MVP) or a pilot-phase project (a proof of concept, or PoC).
An MVP normally takes two to three months to develop at an affordable cost. It increases the chances of a successful AI project, because it lets the dev team monitor, test, and identify the changes needed to build a more stable AI system that meets both technological and organisational needs.
An MVP helps evaluate technical and commercial feasibility in an agile way, and a successful MVP can be scaled quickly into a pilot-phase project.
Change of Perception
You can never change your life until you step out of your comfort zone. In the end it is up to individuals whether they make that “bold” move. Organizations of the 21st century must do the same and think outside the box, putting considerable, collaborative effort into changing how they see AI.
For vendors to put their trust in AI, tech companies will have to operate in full transparency. Close collaboration is also needed across scientific disciplines, industries, and government to show businesses how AI could help them and their consumers.
Marketing and implementing AI in businesses is not easy, but neither is it impossible. All it needs is time: tech giants like IBM, Google, Apple, and Microsoft are optimistic and confident about what AI can do, pouring blood and sweat into solving these AI adoption challenges and encouraging AI-driven startups and companies to take a collaborative approach.
At the same time, one must understand that Artificial Intelligence is not really grounds for a ‘Man vs. Machine’ war. As a matter of fact, AI is all about man and machine together.
AI adoption in the market is scattered and relatively low at the moment, for the reasons given in this article. But things should improve considerably as Explainable AI matures, with results expected over the next five years.
Are you a business owner who wants to give it a try? Then consider the requirements of a successful AI & ML implementation:
Identify the business problem
Get the right AI dev team to solve it
Ask the right questions to know about the technology
Work closely with them to provide tools
Perform goal-centric monitoring and evaluation
VergeSense, a company that develops hardware and software to capture key data from across the physical workplace, has raised $12 million in a series B round of funding. The raise comes as a slew of smart building sensor platforms capitalize on businesses seeking to bring employees back to the office, after having been forced to embrace remote working for much of 2020.
In 2021, corporate big data leaders will be looking to improve data quality and turnaround of big data projects, as well as performance in meeting business objectives. While 2020 hasn’t been a normal year for anyone, you still have to plan for the future and get ready for what may come. Here are seven key big data areas of focus for 2021.
Pre-pandemic, artificial intelligence was already poised for huge growth in 2020. Back in September 2019, IDC predicted that spending on AI technologies would grow more than two and a half times to $97.9 billion by 2023. Since then, COVID-19 has only increased the potential value of AI to the enterprise. According to McKinsey’s State of AI survey published in November 2020, half of respondents say their organizations have adopted AI in at least one function.
Artificial intelligence is classifying real supernova explosions without the traditional use of spectra. By training a machine learning model to categorize supernovae based on visible characteristics, the astronomers are able to classify real data from the Pan-STARRS1 (Panoramic Survey Telescope and Rapid Response System) Medium Deep Survey for 2,315 supernovae with an accuracy rate of 82% without the use of spectra.
A time series comprises four major components. A trend. A seasonal component. A cyclic component. And a stochastic/ random component. All these components may or may not be present in a time series. Therefore, before estimating these components, we need to first check for their existence. If they are present then we can move forward with their estimation. This article explains the Relative Order Test for testing the existence of a trend.
Only four Western and two Chinese AI companies report income, and all have big losses. CrowdStrike and c3.ai both did IPOs and had losses equal to 30% and 40% of revenues respectively in 2019, and 13% and 40% respectively in 2020. Nest’s losses were 85% of revenues in 2017 and DeepMind’s losses were four and 1.7 times its revenues in 2018 and 2019 respectively causing Google to write off $1.3 Billion in debts.
PyTorch can now handle a full deep learning and AI project pipeline, but some tasks, such as forecasting, can get pretty messy in plain PyTorch, so Jan Beitner introduced a third-party library, PyTorch Forecasting.