Most organizations have a lot of data, but not all of it is useful. In fact, a lot of it is unusable because it's unstructured. Watch this webinar replay with Manish Rai, VP of Marketing at super.AI, and Brad Cordova, CEO at super.AI, as they discuss how unstructured data processing (UDP) unlocks unusable data at scale and puts hyperautomation within reach.
You'll learn about the relationship between unstructured information and intelligent automation and find out how you can turn your organization into a data-driven powerhouse.
Host [00:00:01] Hello and welcome back to the Intelligent Automation Network. Here we are at Hyperautomation Live: The Solution Showcase. We very much appreciate you being here. We've had great sessions thus far, and this is no different, although it's completely distinct. We've got "Achieving Hyperautomation with Unstructured Data Processing." UDP, get used to it. It's super.AI: our old friend Manish and our new friend Brad Cordova. Gentlemen, thank you so much for being here. Thanks so much for doing this.
Manish Rai [00:00:36] Thank you. Thank you for having us.
Host [00:00:38] We'll see it live.
Manish Rai [00:00:41] Excellent. So, moving on, just a little bit about my background. Most of you might be familiar with me. I've been in the automation space for the last four years and I've been on the Intelligent Automation Network many times. I started about four years ago at Automation Anywhere, and among my responsibilities there was to launch the IQ Bot product, before we had intelligent document processing as a category. Fast forward four years, and today intelligent document processing is a well-established category out there. I ran into Brad a short while back and I could see the future: once people have tackled documents, there's a rich set of data that is not being tackled. And I was really intrigued by the team and the technology he has put together. So here I am, talking to all of you about unstructured data processing. That's a quick introduction for me. Before we dive in, Brad, would you like to introduce yourself?
Brad Cordova [00:02:04] Sure. Everyone, I'm the CEO and founder of super.AI. My background: I was doing my Ph.D. at MIT, trying to combine symbolic AI and machine learning. During my Ph.D., I founded a company called TrueMotion, and we built the back end for some of the biggest mobility companies around the world, like Lyft and Uber and Progressive Insurance. We achieved unicorn status, so that was a very interesting ride. A lot of the things we learned there about building AI formed the genesis of super.AI, so I'm really excited to talk with you and go over what we are doing here at super.AI.
Manish Rai [00:02:49] Excellent. And what Brad omitted is that while at TrueMotion, he was featured in Forbes 30 Under 30, the next generation of entrepreneurs emerging. So I'm excited to be partnering with Brad to help build this category. We have a presentation lined up: we'll talk about the challenge, go over a use case and the solution, you'll get a quick demo from Brad, and we'll have some polls interspersed to keep it engaging. So let's dive in. Many of you might have seen the stat that 90% of all the world's data was created in just the last two years. But what is even more interesting is when you look at the purple line at the bottom: that is structured data, and it is growing linearly. It's the unstructured data where most of this growth is happening; in fact, 250% since 2018 alone, and it's not stopping. RPA has been all the rage, but it has just scratched the surface of all the data we have, tackling structured data, and now companies are transitioning beyond that and including intelligent document processing as part of the toolkit. But to truly achieve hyperautomation, you need to tackle all the rest of that data: the 80% of data that is not accessible to automation today, and that includes unstructured documents, images, video, audio, text, and satellite images. That's what our platform is built to tackle. So at this point, let's move forward to our first poll question. Over to you to run the poll.
Host [00:05:04] So we want to know here: processes with what type of unstructured data would you like to automate in the next 12 months? We've got a few choices: image, video, documents, audio, text, or none of the above. Documents with an early lead here, Manish.
Manish Rai [00:05:25] Great. Shall we pause to see the results, or share the results towards the end?
Host [00:05:36] Let folks continue to add their thoughts. Image now has a couple of votes as well.
Manish Rai [00:05:45] Excellent. So now let's look at the use cases. We've done hundreds of proofs of concept with Global 2000 companies around the world. On the slide, the rows represent the type of data and the columns are categories of use cases that are beginning to emerge across industries. One common use case is around redaction of data, or anonymization of personally identifiable information. That can include faces and license plate numbers in images and videos. It can include Social Security numbers, addresses, and names in documents and text, and even disguising the voices of people in audio. The second use case we encounter a lot is around data extraction. It could be looking at images of building equipment and extracting the nameplate data from them: serial numbers, model numbers, etc. From videos, it could be extracting license plate numbers or what brands are being displayed, and named entity extraction from unstructured text. Classification is the next use case: what type of product does this image belong to? What types of objects are featured inside the image? In the case of text, what type of request is it? What is the sentiment of the text? Then we get use cases where we have to answer questions about the data. It could be counting the number of objects inside an image or a video, looking at a crop and assessing its quality or the extent of corrosion inside an image, or identifying whether a document is classified or the information is urgent.
So when you look across industries at more of the vertical use cases, what we find is that testing, inspection, and certification, which is done on an outsourcing basis and sometimes in-house, is one very big use case. Today, inspection is a highly manual process where an inspector could be visiting a supermarket, taking images of the equipment inside or images of the building, and then they have to extract the nameplate data from those images and assess the damage from the building images. All of that can now be automated using an unstructured data platform. Similarly, in insurance, you could be taking images of vehicles repossessed after an accident and accidentally capturing faces of people walking by, or license plate numbers. To meet GDPR and other requirements, there's a need to anonymize information not only in documents, but in images, videos, and even audio. Assessing damage is another common use case; a lot of emerging insurance companies want to automate the whole process. In retail, when you're looking to list a new product offering, there are a lot of questions that need to be answered about the quality of the listing and the quality of the images inside the listing, and Brad will go deeper into that use case. In agriculture, we're looking at crop and soil monitoring and disease detection. And in technology, a very interesting use case: a company launching a new virtual reality headset had to assess the sentiment of the user just by looking at video of the user's eyes and nose, and we had to create algorithms to solve that problem. So we have interesting use cases covering all the data types across various industries. Now, let's trigger the second poll at this point.
Host [00:10:53] What category of use cases do you have for your unstructured data? There's redaction, data extraction, categorization, answering questions about the data, or, if applicable, none of the above. Extraction and categorization are out to early leads.
Manish Rai [00:11:16] Exactly. Not unexpected; those are some of the use cases we encounter most commonly. With that, I think this is a good segue for me to hand over to Brad to take you through the rest of the session.
Brad Cordova [00:11:33] Cool. Thanks, Manish. So what I want to do now is take you through a specific use case, give you a demo, and give you a taste of exactly how this technology works. I'm going to go over a case study of one of our customers, the world's second largest e-commerce company. They have millions of products on their e-commerce platform, and they obviously need to know what's going on in these product listings. In fact, there are 55 things they want to know, and they have a very precise model which ties the quality of a product listing to top-line revenue. You can imagine, for example, if the image of the product does not match the description of the product, you likely wouldn't buy it, and this would directly affect their top-line revenue. This turns out to be quite a difficult problem, and the only way they were able to solve it was with a pure human process. And since they were using humans, it took about 15 minutes per product. They didn't actually know the error rate. It was about $3 per product. Because it was so slow and so expensive, they could only do about 1% of the products on their website, which left them blind to the other 99%, and that was really painful and unnerving for them. For use cases like this, this is exactly why we built this platform to process unstructured data. So let me give you an overview of how it works. First, you can input your data from any source. We make it really easy to integrate your data, whether it's via API, from a system of record, your CRM, etc. The next thing we do is break complex tasks into simple tasks. You can think of this like an assembly line. What did Henry Ford teach us? He taught us that if one person tries to build an entire car, it's slow, expensive, and error-prone, whereas if each person does one simple step, like putting a wheel on the car, everything changes. This changed the world.
And so we take this really old idea and bring it to the age of AI, to the digital age. This turns out to be really, really powerful and, I think, undervalued. The next step is that these simple tasks go into a router, and what the router does is route to human, AI, and software bots. This turns out to be really important. Typically, if you want to solve a problem, you'd have to choose just one of these: maybe you're using a BPO solution and outsourcing to humans, or you're using Google's or Amazon's or Microsoft's AI technology or some open source, or an RPA solution. What we found is that the right way to do this is not to say "or" but to say "and." For example, the best chess players in the world aren't human chess players, and they're not AI chess players; they're human-plus-AI chess players. This is exactly what the router does. After the router routes to these different worker types, each worker solves a simple task and makes a prediction, and there could be, in general, dozens of predictions. That's why we built the combiner: you need to intelligently combine these predictions into a single output, because not all predictions are created equal. You can imagine that some human experts may be more trustworthy than a random AI model, and so on. At the output of the combiner, you get a really high-quality result, and we assure this with 150-plus different quality assurance mechanisms. This turns out to be one of the biggest value propositions our customers cite: we can guarantee the quality. That is game-changing for all of us who use machine learning. Despite how powerful it is, we know how much of a pain it is and how it can fail silently; if the data distribution changes just a little bit, all your guarantees go out the window. It's an absolute headache. So it's really valuable to our customers and partners that we can actually guarantee the quality.
And the last step here is the trainer. What the trainer does is take this high-quality data and use it to teach new AI how to solve the task. So what happens is that as you process more of your data, the system automatically gets faster and cheaper, because in the beginning, in order to guarantee quality, the router may need to route to humans, but over time it routes mostly to AI and bots. Not only that, it actually gets higher quality on your specific data. I didn't mention it, but the router, the combiner, and the trainer are all AI algorithms: the router is a reinforcement learning algorithm based on a partially observable Markov decision process, the combiner is a generative model, and the trainer is supervised learning. So in addition to getting faster and cheaper, it gets higher quality.
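The route-then-combine loop Brad describes can be sketched in a few lines of Python. This is a minimal illustration only, not super.AI's actual implementation: the worker trust weights, the 0.9 confidence threshold, and all function names are invented for the example.

```python
from collections import defaultdict

# Assumed historical accuracy per worker type ("not all predictions
# are created equal"); these numbers are illustrative only.
TRUST = {"human": 0.95, "ai": 0.80, "bot": 0.60}

def route(task, ai_predict, human_annotate, ai_confidence, threshold=0.9):
    """Router sketch: use the AI alone once it is confident enough;
    otherwise also gather a human prediction so quality stays guaranteed."""
    predictions = [("ai", ai_predict(task))]
    if ai_confidence(task) < threshold:   # early on, most tasks take this path
        predictions.append(("human", human_annotate(task)))
    return predictions

def combine(predictions):
    """Combiner sketch: a trust-weighted vote over worker predictions.
    Takes (worker_type, label) pairs; returns the winning label and a
    normalized confidence score."""
    scores = defaultdict(float)
    for worker, label in predictions:
        scores[label] += TRUST[worker]
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())
```

For instance, `combine([("ai", "cat"), ("human", "dog"), ("bot", "dog")])` picks "dog", because the combined human-plus-bot weight (1.55) outweighs the lone AI prediction (0.80).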
Manish Rai [00:16:45] Brad, a quick check: are you sharing your slides? Because I personally can't see them.
Brad Cordova [00:16:52] I am sharing the slides.
Host [00:16:56] I also cannot see them. Right.
Brad Cordova [00:16:58] Okay. Let me reshare.
Manish Rai [00:17:06] Yeah. If you could go back and quickly walk through the slides you wanted to show, because we didn't see a single slide. I didn't want to interrupt your flow.
Brad Cordova [00:17:23] Okay. Um, sorry about that, everyone. So just to go back over this: you can connect your unstructured data from any source, as I mentioned. What we do next, using data programming, is break complex tasks into simple tasks, like the assembly line. Then the router, as I mentioned, routes to humans, AI, and bots. Each of those makes its prediction, and in general they could agree or disagree. That's really the power of the system: you get these orthogonal predictions, and then the combiner intelligently combines these potentially dozens of predictions into a single output. The final step is the trainer, which takes that high-quality output and trains new AI models as workers. So the more data you process, the faster and cheaper it gets, because you're essentially replacing humans with AI, and it gets higher quality, because each of these modules learns to do its job better on your specific data. We have case study after case study, but this particular case was a very unique problem. It started out with 0% automation, meaning that 100% of the tasks were handled by humans. But after only 5,000 data points it was 92% automated, and now it's over 99% automated. By applying our AI infrastructure to this unstructured data problem, we took the processing time from 17 minutes to less than half a second, and we can actually guarantee the quality. As I mentioned, we dropped the cost of processing by two orders of magnitude. All of this now allows them to process 100% of their products, and it has really been game-changing: it has saved them $26 million in just the first eight months, and now we're applying this to many different problems for them. That being said, let's jump into a demo and show you just how this works. There are many different applications, as Manish showed.
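The trainer's flywheel described here (validated outputs become training data, so the automation rate climbs from 0% toward 99%) can be sketched as follows. This is an illustration only: the lookup-table "model" is a stand-in for a real learned model, and every name and threshold is invented for the example.

```python
class Trainer:
    """Sketch of the train-as-you-go loop: every validated output
    becomes a training example, so the AI can take over more tasks."""

    def __init__(self):
        self.examples = {}                 # task key -> validated label

    def add_validated(self, task_key, label):
        self.examples[task_key] = label    # high-quality combiner output

    def predict(self, task_key):
        # Returns (label, confidence); confident only on learned patterns.
        if task_key in self.examples:
            return self.examples[task_key], 0.99
        return None, 0.0

def automation_rate(trainer, tasks, threshold=0.9):
    """Fraction of tasks the AI can handle without human help."""
    automated = sum(1 for t in tasks if trainer.predict(t)[1] >= threshold)
    return automated / len(tasks)
```

Starting empty, the rate is 0.0; as validated labels accumulate for more task types, the rate rises, mirroring the progression from fully human to mostly automated described above.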
But I'll just go over one: image redaction. Essentially, anybody can come and try it out; you can connect your existing data, or you can start with an example. This one is a relatively simple UDP application: you can upload an image and then remove PII. In this case, this model can remove faces, license plates, brands, and text in images, but in general you can remove hundreds of different things. So what we're doing now is we've uploaded the data and chosen what we wanted. Now it's going through the data program, which decomposed it into small tasks. You can see that the data point is now in the queue and the router is routing to human, AI, and software functions, and you can see all of that in this dashboard; you don't need to be an expert or a programmer. Oh, it looks like this finished, and it looks like it routed to an AI. So we can review the results, and we can see, for example, that it blurred some license plates and some faces. I'm going to say this is correct. So we just handled the first data point and added validation to the model. We could, of course, upload more data from our computer or upload CSVs, or if you have your data stored in a cloud database or an on-prem database, or maybe you're a developer and you want to use the API, Python SDK, or CLI, of course you can. With the click of a button you can add new AI models to automate this even more; we have the most cutting-edge models on our platform, whether they come from open source, Amazon, Google, Microsoft, etc. You can add more human, AI, and software workers, and many more things. So this handles PII removal, but it also works on textual data. This is similar to a project we did with one of the biggest tech companies, handling chatbot data, and you can see here again this team of AI and human working together.
The model already made a suggestion of San Francisco as the location, but of course you can have human input as well: I add the restaurant name and submit it. Every time I submit, it makes the AI smarter and smarter; that's the loop I talked about. This also works on image data. For example, I can label this car, and what's important is that if I had to segment this entire image by hand, it would take a really long time. But working with the AI, with just two clicks of a button I made a really good segmentation that would usually take five or ten minutes. Using human and AI together makes the system extremely scalable. So we handle things like damage detection on cars, and that can get detected automatically. We also handle transcription of license plates: for example, if you have a bunch of images of license plates and you want them in a structured format along with where each license plate is from, you can do that. And of course this can handle video data, if you have video of car damage. So that's a quick example of how the platform works. I hope that was interesting for you, and I'm happy to answer any questions as we go. We're really excited to see the insane adoption by leading global enterprises; even before we had a sales and marketing team, there was massive adoption, and it's just increasing. For example, some of the top tech companies, the second largest credit card company in the world, top ten retailers, and gas companies. And we're just getting started. Of course, we want this to be easy to use. You can have great technology, but if it's hard to integrate, that's a problem; we've all worked in large enterprises and we know it's really important to work with existing solutions, so that was top of mind when designing this.
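The PII-removal idea from the demo can be sketched for text data with simple pattern matching. A production system like the one demonstrated would use trained models rather than regexes; the patterns and placeholder tokens below are invented purely to show the shape of the task.

```python
import re

# Illustrative patterns only; real systems learn PII detectors from data.
PII_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text):
    """Replace each detected PII span with a typed placeholder."""
    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{kind} REDACTED]", text)
    return text
```

For example, `redact("Call 555-867-5309 or mail jane@example.com")` yields `"Call [PHONE REDACTED] or mail [EMAIL REDACTED]"`.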
And of course this was only possible by bringing in top AI and ML pioneers: people from the founding team of Google AI, people who were on the founding team of IWC, my research colleagues at MIT, two of the people who built the deep learning platform at Microsoft Research, and people from Apple, Facebook, etc. I'm just very grateful for the team, because this is all possible because of them. So I threw a lot at you; let's just summarize real quick what makes super.AI unique. First of all, instead of "or" we say "and": we use the best of human, AI, and software bots. Each of these worker types has its pros and cons, so why make a choice? The second thing is that we allow you to automate these unstructured data processes automatically: we talked about the router, the combiner, and the trainer, and how they automatically improve quality, cost, and speed. Essentially, this allows you to get AI working at scale in production in two weeks, instead of needing four to six months and an expert team of AI Ph.D.s. In addition, as I mentioned, we can guarantee the quality, which is important. And finally, this is really easy to integrate, extend, and customize. We don't want you to start from scratch; we want you to harness your existing RPA, your existing AI, and your existing human investments out of the box. It was really important to us that you could use this as a no-code platform if you're a business user or a product person, or use a developer-friendly SDK. With that being said, our vision, to use an analogy with self-driving cars, is to create the fully autonomous enterprise. Whether you're at level one, which is human processing, or at level three, where you still have human intervention, we want to take you to level five, so you can fully autonomously process unstructured data. This is really important to us, and we're seeing a lot of value in it.
That being said, we can take our final poll here. You can activate it, and then we can move on to Q&A.
Host [00:27:22] Yeah, we're live with the poll. Brad, thanks so much. We do have a couple of questions here. Number one: can super.AI interpret contract statements and legal terms and conditions in legal contract documents?
Brad Cordova [00:27:36] Yes, actually, that was one of our first use cases. We worked with one of the biggest law firms in the world, and the first project, which sounded simple but was very difficult for them, was finding the start and end dates of contract terms. Since then, we've worked on a lot of different things. But to answer the question: yes, definitely.
Host [00:28:00] Excellent. You said you've worked on a lot of different things, and you mentioned some anonymized but very impressive clients. What came to mind was: were there similarities between what those organizations asked you to do? Were there similarities in what you saw behind the walls a little bit?
Brad Cordova [00:28:25] Yeah, there were, and we've consolidated these similarities into four product categories. I don't know if you can see my screen, but we saw these four things over and over again. There were a lot of fintech and insurance companies asking us to redact a lot of different things. Then there were a lot of companies asking us to extract things; I think everyone's familiar with IDP extracting things out of documents, but it goes well beyond that. Then classifying: for example, we had to classify credit card transactions. And finally, answering questions about things, like how many objects there are. These are the four things that almost every company we've ever worked with has.
Host [00:29:16] Great, and a question for Manish: if you were speaking to the Manish from half a decade ago, what's the most surprising thing about what you're doing now versus what you were doing then? You might be on mute.
Manish Rai [00:29:35] Four or five years ago, we were just starting out on the journey of automating document processing. We were beginning to get use cases across the board, we were trying to find commonality in those use cases, and we started seeing more and more use cases around semi-structured data. I see us at exactly the same place today with unstructured data processing. We have seen hundreds of use cases, and we are beginning to see these patterns emerging. The visual inspection use case tends to be very common, where people are taking images of building equipment or images of buildings, detecting corrosion, and extracting nameplate data, and then there's redaction as a use case, anonymizing data. So I feel we're at exactly the same point as five or six years ago with IDP: we are on the cusp of the new world of unstructured data processing, entering it to help people achieve hyperautomation, so to speak, and become a fully autonomous enterprise, as Brad mentioned.
Host [00:30:57] A perfect place to end. Manish, Brad, thank you so much for your time and your insight. For those in attendance, we really appreciate you being here as well. We're going to start our next session in just 14 minutes in the next room. But one more time, Manish and Brad, thanks so much for your time. Appreciate it.
Manish Rai [00:31:11] Thank you, sir. Bye now.