Salesforce announces new Mulesoft RPA tool based on Servicetrace acquisition

Salesforce RPA tool

When Salesforce announced it was buying German RPA vendor Servicetrace last month, it seemed that it might match up well with Mulesoft, the company the CRM giant bought in 2018 for $6.5 billion. Mulesoft, among other things, helps customers build APIs to legacy systems, while Servicetrace provides a way to add automation to legacy systems.

Read More

Tech behind Water Resource Management startup Cranberry Analytics

Cranberry Analytics

In a conversation with Analytics India Magazine, Co-founder and CTO Shishir explained the technology behind Cranberry Analytics’ management system Recon, breaking down how AI and ML can facilitate the management of natural resources and what the future for AI in water management looks like. 

Read More

Raymon AI helps teams detect and solve quality issues in AI systems

Raymon AI helps teams detect and solve quality issues in AI systems. As such, it is a monitoring and observability platform for AI-based systems. Monitoring tools help engineers find out when something is wrong; observability tools help them find out why. In short, observability platforms give engineers more insight into their systems, which is crucial for maintaining them cost-effectively. In this blog post we dig a bit deeper into why observability tooling is crucial for the successful adoption of AI-based systems.
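As a generic illustration of the "monitoring" half of that distinction (a sketch only, not Raymon's API; the metric names and thresholds here are made up):

```python
# A minimal monitoring check: alert when a tracked model metric
# drifts outside an expected range. Illustrative only -- not Raymon's API.

def check_metric(name, value, low, high):
    """Return an alert string if value falls outside [low, high], else None."""
    if not (low <= value <= high):
        return f"ALERT: {name}={value} outside expected range [{low}, {high}]"
    return None

latest = {"accuracy": 0.71, "latency_ms": 42.0}            # hypothetical readings
expected = {"accuracy": (0.85, 1.0), "latency_ms": (0.0, 200.0)}

alerts = [a for a in (check_metric(k, latest[k], *expected[k]) for k in latest)
          if a is not None]
print(alerts)
```

Monitoring like this tells you *when* the accuracy dropped; an observability platform would also surface the data slices, input drift, or upstream changes that explain *why*.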

Read More

Introduction to Voice User Interfaces (Part - 2)

Voice user interface

One of the most popular application areas for voice systems today is conversational AI. Graph-based interaction mainly focuses on asking pointed questions in a prescribed order and only accepting specific terms as responses. We've seen this plenty of times before: we can't move forward in the system until we provide our user ID, or we can't specify our destination until we've provided the location we'll be starting from.

Read More

Introduction to Voice User Interfaces (Part - 1)

Voice user interface

VUI system overview and an introduction to some current VUI applications. By Prateek Sawhney.

Hello and welcome to this article on voice user interfaces. A VUI is a speech platform that enables humans to communicate with machines by voice. VUIs used to be the stuff of science fiction: movies and TV shows featuring spaceship crews that communicated verbally with their computers seemed fantastic. But that fantastic future is here now. Voice-enabled agents are becoming commonplace on our phones, computers, and cars, to the point that many people may no longer think of these systems as artificial intelligence at all. Under the hood, though, there is a lot going on. Audio sound waves from voice must be converted into language text using machine learning algorithms and probabilistic models. The resulting text must be reasoned over using AI logic to determine the meaning and formulate a response. Finally, the response text must be converted back into understandable speech, again with machine learning tools. These three parts constitute a general pipeline for building an end-to-end voice-enabled application. Each part employs some aspect of AI, and that's why we're here.

In this article we'll go through a VUI system overview and talk about some current VUI applications. We'll focus on conversational AI applications, where we'll learn some VUI best practices and why we need to think differently about user design for voice compared to other interface mediums. Finally, we will put these ideas into practice by building our own conversational AI application.

VUI Overview

Let's take a closer look at the basic VUI pipeline we described earlier. To recap, three general pieces were identified: voice to text; text input reasoned to text output; and finally, text to speech.

Speech Recognition

It starts with voice to text. This is speech recognition.
Speech recognition is historically hard for machines but easy for people, and it is an important goal of AI. As a person speaks into a microphone, sound vibrations are converted to an audio signal. This signal can be sampled at some rate and those samples converted into vectors of component frequencies. These vectors represent features of the sound, so this step can be thought of as feature extraction.

The next step in speech recognition is to decode, or recognize, the series of vectors as a word or sentence. To do that, we need probabilistic models that work well with time-series data of the sound patterns. This is the acoustic model. Decoding the vectors with an acoustic model gives us a best guess as to what the words are. That might not be enough, though: some sequences of words are much more likely than others. For example, depending on how the phrase "hello world" was said, the acoustic model might not be sure whether the words are "hello world" or "how a word" or something else. Now, you and I know that it was most likely the first choice, "hello world". But why do we know? We know because we have a language model in our heads, trained from years of experience, and that is something we need to add to our decoder. An accent model may be needed for the same reason. If these models are well trained on lots of representative examples, we have a higher probability of producing the correct text. That's a lot of models to train: acoustic, language, and accent models are all needed for a robust system, and we haven't even gone through the whole VUI pipeline yet.

Reasoning Logic

Back to the pipeline: once we have our speech in the form of text, it's time to do the thinking part of our voice application, the reasoning logic. If I ask you, a human, a question like "How's the weather?", you may respond in many ways: "I don't know", "It's cold outside", "The thermometer says 90 degrees", etc.
In order to come up with a response, you first had to understand what I was asking for, then process the request and formulate a response. This was easy because you're human. It's hard for a computer to understand what we want and what we mean when we speak. The field of natural language processing (NLP) is devoted to this quest. To fully implement NLP, large datasets of language must be processed, and there are a great many challenges to overcome. But let's look at a smaller problem, like getting just a weather report from a VUI device.

Let's imagine an application that has weather information available in response to some text request. Rather than parsing all the words, we could take a shortcut and just map the most probable request phrases for the weather to a get-weather process. In that case, the application would in fact understand requests most of the time. This won't work if the request hasn't been pre-mapped as a possible choice, but it can be quite effective for limited applications and can be improved over time.

TTS (Text To Speech)

Once we have a text response, the remaining task in our VUI pipeline is to convert that text to speech. This is speech synthesis, or text to speech (TTS). Here again, examples of how words are spoken can be used to train a model to provide the most probable pronunciation components of spoken words. The complexity of the task can vary greatly when we move from, say, a monotonic robotic voice to a rich human-sounding voice that includes inflection and warmth. Some of the most realistic-sounding machine voices to date have been produced using deep learning techniques.

VUI Applications

VUI applications are becoming more and more commonplace. There are a few reasons driving this. First of all, voice is natural for humans: it's effortless for us to converse by voice compared to reading and typing. And secondly, it turns out it's also fast.
Speaking into a text transcriber is three times faster than typing. In addition, there are times when it is just too distracting to look at a visual interface, like when you're walking or driving. With the advent of better and more accessible speech recognition and speech synthesis technologies, a number of applications have flourished. For example, voice interfaces can be found in cars: drivers can initiate and answer phone calls, give and receive navigation commands, and even receive texts and email without ever taking their eyes off the road. Other applications in web and mobile have been around for a few years now but are getting better and better. Dictation applications leverage speech recognition technologies to make putting thoughts into words a snap. Translation applications leverage speech recognition and speech synthesis, as well as some reasoning logic in between, to convert speech in one language to speech in another. If you've tried any of these, you know it's not quite a universal translator, but it's pretty amazing to be able to communicate through one of these apps with someone you couldn't speak to before.

One of the most exciting innovations in VUI today is conversational AI technology. We can now carry on a conversation with a cloud-based system that incorporates well-tuned speech recognition, some functionality, and speech synthesis into one system or device. Examples include Apple's Siri, Microsoft's Cortana, Google Home, and Amazon's Alexa on Echo. Conversational AI really captures our imaginations because it seems to be an early step toward the more general AI we've seen in science fiction movies. The home assistant devices in this category are quite flexible: in addition to running a search or giving you the weather, these devices can interface with other devices on the internet, link with your accounts if you want, fetch saved data, and so on. Even better, development with these technologies is accessible to all of us.
We really only need a computer to get started creating our own application in conversational AI. The heavy lifting of speech recognition and speech synthesis has been done for us and turned into cloud-based APIs. The field is new and just waiting for smart developers to imagine and implement the next big thing. There's a lot of opportunity out there to come up with any voice-enabled application we can think of. That's it for voice user interfaces. Thanks for reading and following along!
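The phrase-mapping shortcut described in the reasoning-logic section above can be sketched in a few lines of Python; the request phrases and the canned weather reply here are made-up examples:

```python
# Map the most probable request phrases straight to a handler function,
# skipping full natural language parsing (effective for limited applications).

def get_weather():
    return "It's 72 degrees and sunny."   # placeholder weather report

INTENT_MAP = {
    "how's the weather": get_weather,
    "what's the weather like": get_weather,
    "will it rain today": get_weather,
}

def respond(utterance):
    """Normalize the utterance and look it up among the pre-mapped phrases."""
    handler = INTENT_MAP.get(utterance.lower().strip(" ?!."))
    if handler is None:
        return "Sorry, I didn't understand that."  # request wasn't pre-mapped
    return handler()

print(respond("How's the weather?"))
```

As the article notes, this fails for any request that hasn't been pre-mapped, but the table can be grown over time as real user phrases are observed.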

Read More

Artificial Intelligence in Film Industry is Sophisticating Production

AI film production

Artificial intelligence in filmmaking might sound futuristic, but we have already arrived there. Technology is making a significant impact on film production. Today, most top-performing movies in the visual effects category use machine learning and AI in their making. Major pictures like 'The Irishman' and 'Avengers: Endgame' are no exception.

Read More

Introduction to NLP Deep Learning theories

Open books

In this post, I will summarize what I learnt from Natural Language Processing with Deep Learning, offered by Stanford University, including the Winter 2017 video lectures and the Winter 2019 lecture series. Both were taught by Prof. Christopher Manning. The deep learning NLP topics covered across the two lecture series include…

Read More

Leading MLOps Tools are the next frontier of Scaling AI in the Enterprise

Machine Learning

Machine Learning Operations (MLOps) is on the rise as a critical technology for scaling machine learning in the enterprise. According to McKinsey, by 2030 ML could add up to 13 trillion dollars to the global economy by enabling workers in all sectors to improve their output. Furthermore, MarketWatch projects substantial growth in the global MLOps market between 2021 and 2027.

Read More

Large language models aren’t always more complex


Language models such as OpenAI's GPT-3, which leverage AI techniques and large amounts of data to learn skills like writing text, have received an increasing amount of attention from the enterprise in recent years. From a qualitative standpoint, the results are good — GPT-3 and models inspired by it can write emails, summarize text, and even generate code for deep learning in Python. But some experts are skeptical that the size of these models — and of their training datasets — corresponds to performance.

Read More

Is AI the best solution for Crowd Management?

Crowd management

Machines can read and learn from different types of data and then perform real-world tasks. AI is also being used to simplify and improve how humans control crowds and populations worldwide; authorities are experimenting with AI to monitor crowds in Helsinki and in Andhra Pradesh. While AI can augment, automate, and improve decision-making and planning, it's only part of the equation. Security basics are still necessary when it comes to policing and monitoring crowds, says a crowd control expert at Todoos.

Read More

How conversational AI is the perfect lead generation tool

Conversational AI

Conversational AI refers to technologies, like chatbots or virtual agents, that users can talk to. These systems use large volumes of data, machine learning, and natural language processing to imitate human interactions, recognizing speech and text inputs and translating their meanings across various languages. Now that we know what conversational AI is, the basic question arises: how can conversational AI help generate leads?

Read More

Artificial General Attention Economics

Attention economics

While you are reading this article, you are paying "attention" to the text right in front of you, a latent attentive behavior triggered by a need for a specific type of information. Your eyes focus on certain text, and you cognitively decipher it, which triggers some neurons to fire and register it in your memory.

Read More

What is Feature Engineering — Importance, tools and techniques for Machine Learning

Feature engineering

Let's see a few of the best feature engineering techniques that you can use. Some of the techniques listed may work better with certain algorithms or datasets, while others may be useful in all situations.

1. Imputation

When it comes to preparing your data for machine learning, missing values are one of the most typical issues. Human errors, data flow interruptions, privacy concerns, and other factors can all contribute to missing values. Whatever the cause, missing values hurt the performance of machine learning models. The main goal of imputation is to handle these missing values. There are two types of imputation:

Numerical Imputation: Missing numerical values are filled in with a default value or a statistic computed from the data that is present, much as gaps in surveys or censuses are filled using the completed responses.

#Filling all missing values with 0
data = data.fillna(0)

Categorical Imputation: When dealing with categorical columns, replacing missing values with the most frequent value in the column is a smart solution. However, if you believe the values in the column are evenly distributed and there is no dominating value, imputing a category like "Other" would be a better choice, as your imputation is more likely to converge to a random selection in this scenario.

#Filling missing values with the most frequent category
data['column_name'].fillna(data['column_name'].value_counts().idxmax(), inplace=True)

2. Handling Outliers

Outlier handling is a technique for removing or adjusting outliers in a dataset, producing a more accurate representation of the data. This affects the model's performance: depending on the model, the effect can be large or minimal; linear regression, for example, is particularly susceptible to outliers. This procedure should be completed prior to model training. The various methods of handling outliers include:

Removal: Outlier-containing entries are deleted from the distribution. However, if there are outliers across numerous variables, this strategy may result in a big chunk of the dataset being discarded.

Replacing values: Alternatively, the outliers can be treated as missing values and replaced with suitable imputation.

Capping: Using an arbitrary value, or a value from the variable's distribution, to replace the maximum and minimum values.

Discretization: Discretization is the process of converting continuous variables, models, and functions into discrete ones. This is accomplished by constructing a series of contiguous intervals (or bins) that span the range of the desired variable/model/function.

3. Log Transform

Log transform is one of the most used techniques among data scientists. It's mostly used to turn a skewed distribution into a normal or less-skewed distribution: we take the log of the values in a column and use those values as the column. It compresses widely spread values, bringing the distribution closer to normal.

#Log transform example
df['log_price'] = np.log(df['Price'])

4. One-hot Encoding

One-hot encoding represents each element of a finite set as a vector in which only the position corresponding to that element is set to "1" and all other positions are set to "0". In contrast to binary encoding schemes, where each bit carries information about several cases, this scheme assigns a unique indicator column to each possible category.

5. Scaling

Feature scaling is one of the most pervasive and difficult problems in machine learning, yet it's one of the most important things to get right. In order to train a predictive model, we need data whose features are scaled up or down as appropriate. After a scaling operation, the continuous features become similar in terms of range. Although this step isn't required for many algorithms, it's still a good idea. Distance-based algorithms like k-NN and k-Means, on the other hand, require scaled continuous features as model input. There are two common ways of scaling:

Normalization: All values are scaled into a specified range, usually between 0 and 1, via normalization (or min-max normalization). This modification has no influence on the shape of the feature's distribution; however, it does exacerbate the effect of outliers, since an outlier shrinks the range left for the remaining values. As a result, it is advised that outliers be dealt with prior to normalization.

Standardization: Standardization (also known as z-score normalization) scales values while accounting for standard deviation: all the data points are shifted by subtracting the mean and then divided by the distribution's standard deviation, arriving at a distribution with mean 0 and variance 1. If features have different standard deviations, their raw ranges will differ too; standardization removes that difference and reduces the effect of outliers in the features.

Learn More about Feature Engineering Techniques
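The two scaling schemes above can be illustrated with a minimal NumPy sketch (the sample values, including the deliberate outlier, are arbitrary):

```python
import numpy as np

data = np.array([1.0, 2.0, 3.0, 4.0, 10.0])   # note the outlier at 10

# Normalization (min-max): rescale all values into [0, 1].
normalized = (data - data.min()) / (data.max() - data.min())

# Standardization (z-score): subtract the mean, divide by the standard deviation.
standardized = (data - data.mean()) / data.std()

print(normalized)      # the outlier squeezes the other values toward 0
print(standardized)    # mean 0, unit variance
```

Printing `normalized` shows why outliers should be handled first: the four ordinary values end up crowded into the lower third of the [0, 1] range.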

Read More

Low-Code/No-Code: From the Perspective of Frontend Intelligence

Code - No-code

Today, "human-machine collaborative programming" frees software developers from the tedious tasks of assembling UI elements and writing business logic, so they can transition to other tech-intensive work on business abilities, basic abilities, and bottom-layer abilities. For more information, please see: Frontend Intelligence: A New Way of Thinking.

Read More

Machine Learning: For the foreseeable future

Machine Learning - near future

There are various problems and tasks in the world that are very hard to solve just by programming computers with traditional methods and explicit instructions. Making computer games, phone apps, or desktop applications is very doable through normal means, whereas making a machine that can beat the best human at a computer game, or making a car that can drive itself and have a computer recognize objects, is not so simple; these are not things that you can easily just tell a computer to do. One way around this is to teach the computer how to learn, and have it figure out how to get better through lots of practice, or in a computer's case, lots of data. This is machine learning.

Read More

ML-Ops: Operationalizing a Machine Learning Model, end to end

End-to-end ML

As the machine learning (ML) community continues to grow, we want to deploy and serve our models better. ML deployment faces the general issues in the deployment lifecycle of any software application, plus an additional set of ML-specific issues. To build a machine learning model for a specific use case, we perform data collection, feature engineering, model building, and model evaluation.

Read More