Artificial intelligence (AI) is an innovation powerhouse. It learns autonomously and evolves to meet both simple and complex needs, from product recommendations to business predictions. As more people and services produce data, more powerful AI is needed to process it all. AI chipsets that use edge computing are the solution.
The International Space Station will soon get a delivery of powerful AI, edge and cloud computing tools from HPE and Microsoft Azure to expand technology experiments aimed at preparing NASA for launching future crewed exploratory missions to Mars. The new equipment and software, including HPE’s specialized, second-generation Spaceborne Computer-2 (SBC-2), will mark the first time that broad AI and edge computing capabilities will be available to researchers on the space station, Tom Keane, Microsoft’s vice president of Azure Global, wrote in a Feb. 11 post on the Azure blog.
Thinking about picking up a technology because it’s well suited to your product requirements? You might be wrong.
From voice and language driven AI to healthcare, cybersecurity and beyond, these are some of the key AI trends for 2021.
What if companies could build more accurate ML models by training data across disparate clouds and multiple data platforms?
Amid market uncertainty and volatility, scaling and improving the performance of machine learning (ML) models has become a vital endeavor for the enterprise. The trouble is that for many enterprises, the very architecture upon which their ML models run functions not as an ally but as an enemy of performance and availability.
In tandem with the rise of cloud-native architectures, the practice of integrating across disparate cloud platforms has grown in recent years. This has been doubly true for the data, analytics, and AI enterprise community, as businesses work to make the most of their existing technology investments in order to optimize operations amid current market uncertainties. Such integration, unfortunately, often entails investing in software that facilitates on-premises, hybrid, edge, and multi-cloud deployments to move and process data most effectively and economically.
Stringing together this kind of loosely coupled architecture may indeed cut costs and engender a sense of agility and flexibility. But imagine moving ML model data at speed and at scale across such a landscape. Certainly, it can be done, but often only asynchronously. With use cases like anomaly detection, the real-time accuracy of a model depends upon real-time access to live data. Take, for example, organizations operating in highly regulated markets. For financial institutions, even a highly unified on-premises data architecture can stand in the way of progress, since financial regulations can limit the very mobility of data itself.
But what if companies could more rapidly improve the accuracy of their ML models by training data across multiple clouds or across multiple data sovereignty regions/networks, all without compromising data privacy or security?
First discussed in a paper published by Google in 2016, federated learning allows data scientists to architect a series of local models that collaboratively learn from a shared model, while keeping all of the training data local. Google put this idea to work in 2017, equipping its Gboard keyboard on Android mobile devices with the ability to constantly improve type-ahead predictions using data from all participating devices, without having to actually move data from each device to the cloud. This approach provided Google (and Android users) with a number of important benefits:
More accurate and robust predictive modelling for personalized (per device) type-ahead recommendations
Improved data privacy, capitalizing on the data minimization principle enshrined in the EU’s General Data Protection Regulation (GDPR)
Increased performance with lower latency and less power consumption, which leads to an improved cost model
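The mechanics behind that keyboard example can be sketched with federated averaging (FedAvg), the aggregation rule at the heart of most federated learning systems: each client trains on its own private data, and only model parameters, never raw data, travel back to the server for a weighted average. The toy linear model, client data, and function names below are illustrative assumptions, not drawn from any particular framework.

```python
# Minimal FedAvg sketch: clients fit y ~ w*x locally; the server averages
# the resulting weights, weighted by each client's dataset size.

def local_update(weights, data, lr=0.1, epochs=5):
    """One client's local training: gradient descent on its private data."""
    w = weights
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(global_w, client_datasets):
    """One round: clients train locally; the server computes the FedAvg
    weighted average. Raw (x, y) pairs never leave their client."""
    total = sum(len(d) for d in client_datasets)
    updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
    return sum(cw * n for cw, n in updates) / total

# Three clients whose private data all follow y = 2x; no pooling occurs.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(0.5, 1.0), (1.5, 3.0), (2.5, 5.0)],
]

w = 0.0
for _ in range(20):
    w = federated_round(w, clients)
print(round(w, 2))  # → 2.0: the shared model converges without shared data
```

In a production setting the "model" would be a neural network and the clients would be phones or data centers, but the privacy property is the same: only parameters cross the network, which is what makes the GDPR data-minimization benefit above possible.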
Unfortunately, even with this early success, federated learning has not yet found its way into mainstream enterprise solutions. But that is slowly changing, as vendors are looking for ways to productize this unique architecture. In 2019, Google launched TensorFlow Federated (TFF), an open source framework, which Google hopes will encourage experimentation among the broader industry. And it is likely that the company will seek to build a more operationalized or perhaps even productized application of this framework in the future.
But Google is not the only vendor investigating federated learning. Given IBM’s storied history as an early innovator across a number of potentially impactful but somewhat unpredictable ideas such as blockchain and quantum computing, it’s no surprise that the company would itself investigate the notion of federated learning.
Tucked away among an array of new features in the company’s recently updated Cloud Pak for Data (now at version 3.5), IBM slipped in a tech preview of federated learning as a package of Python algorithms. The basic idea is to equip companies with the ability to train and refine AI models without having to centralize the data supporting those models. But IBM doesn’t intend for its customers to put its implementation of federated learning to work on isolated use cases like type-ahead predictions. Rather, the company hopes enterprises will see federated learning as an opportunity to drive better AI outcomes through better model training, unimpeded by data silos, and to encourage multiple parties to participate without worrying about the privacy, compliance, security, or performance concerns mentioned above.
IBM’s approach has three important and interrelated consequences. First, it allows data scientists and AI developers to make use of large amounts of data without having to maintain stable, performant data pipelines between the source and the model. Second, it enables data scientists to make use of source data without compromising data privacy or security. Third, enterprises can avoid the cost and risk of re-architecting or moving data in order to maintain model-training performance.
Does this mean IBM customers can simply port existing projects to a federated learning environment? Certainly not. Given the many complexities of federated learning itself, companies should carefully evaluate the value of adapting existing projects to leverage this new capability. For example, customers must carefully map out how they wish to share data, tune hyperparameters, architect privacy models, and so on in order to use federated learning most effectively. Furthermore, because federated learning is still a very immature technology, there are many unanswered questions that must be addressed by all community players, such as how to overcome challenges in data heterogeneity, client model bias, and inference attack prevention.
The best strategy will include some joint experimentation between customers and technology partners looking to build bespoke implementations for specific business outcomes. To that end, IBM has a great start, as there are already 50 systems integrator and technology partners developing industry-specific content sold as part of Cloud Pak for Data. Even horizontal database, data management, and data science partners like MongoDB, NetApp, and Anaconda, respectively, may find new routes to market in directly supporting verticalized implementations.
Regardless of the route to market for federated learning, these early efforts from market leaders Google and IBM bode well for enterprise AI practitioners. While it is unlikely that federated learning will ever show up as a one-click, deployable service option, the fact that it’s now available both as a framework (from Google) and as a set of libraries (from IBM) proves that there is light at the end of the tunnel for companies struggling to solve tough architectural AI challenges.
Note: this post was taken from a post that originally appeared within the Omdia VisionAIres community.
Gaurav Shah: The global data monetization market size is expected to reach USD 7.34 billion by 2027, registering a CAGR of 24.1% from 2020 to 2027, according to a study conducted by Grand View Research, Inc.
In 2021, corporate big data leaders will be looking to improve data quality and turnaround of big data projects, as well as performance in meeting business objectives. While 2020 hasn’t been a normal year for anyone, you still have to plan for the future and get ready for what may come. Here are seven key big data areas of focus for 2021.
Of all the things this pandemic has taught us, one of the primary concerns many people still face is uncertainty in planning their travel, whether in an emergency or for leisure. This resulted in TravoBOT, a chatbot that helps users by collating information from different data sources and providing a travel recommendation for a particular destination.
The United States Government [will] sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy … [that includes] better enabling the use of cloud computing resources for federally funded AI R&D.
The unpredictable events of 2020 have nudged many budding startups and large enterprises to adopt cloud services to ensure business continuity. Let’s look at five cloud computing trends that will see improvements in 2021 for enterprises and consumers.
Over the last few decades, Big Data has become a significant concept across the technology landscape. Additionally, the availability of wireless connectivity and related advances has facilitated the analysis of large data sets.
Natural Language Processing (NLP) is a large area of research with many relevant applications for businesses. Taking in arbitrary text to extract sentiment, perform translation, or power auto-suggest and auto-correct are some typical use cases, but the applications are, of course, endless.
Six steps to create a customer model workflow for A2I using Angular.
In this article, we will look at why a service mesh is critical to implementing a microservices architecture, and how Istio helps achieve it.
Fintech players and payment providers have seen artificial intelligence as a blessing: it helps customers make online purchases during social distancing and allows people to avoid leaving their homes.
An expanding number of challenger banks have eschewed the industry’s traditionally slow and steady innovation approach.
These banks are offering digital or mobile-only platforms powered by cutting-edge technologies like artificial intelligence (AI), biometrics and machine learning (ML) to fulfill users’ needs with greater speed and a little more flair.