TECH B2B Manufacturing Matters Featuring Integro Technologies

Winn Hardin, TECH B2B Marketing:

Hello everyone, and welcome to TECH B2B's Manufacturing Matters, a series of short video segments that looks at automation technologies and the trends that are affecting our markets today. As always, I'm joined by my good friend, pundit, and account executive here at TECH B2B, Jimmy Carroll. And we're also lucky enough to be joined by David Dechow from Integro Technologies. David is an A3 Lifetime Achievement Award winner as well as an instructor for the Certified Vision Professional program.

Today we're going to talk about artificial intelligence in manufacturing, so without any further ado, Jimmy, do you have a question to kick things off for David?

Jimmy Carroll, TECH B2B Marketing:

I do, yeah. Thanks, Winn, and thanks, David, for joining us. It's always a pleasure to talk to you. So, David, first question, let's get right to it. We hear a lot about deep learning being a black box. That's one of the main concerns of integrators and OEMs when it comes to deep learning adoption and deployment, right? Are we getting control of that black box from the systems integrator perspective, or is it still, like lighting was described early in machine vision, sort of an art?

David Dechow, Integro Technologies:

That's a great question, and it comes up quite often. I particularly enjoy the way this is worded, where we're kind of comparing it to the magic art of illumination in machine vision. We always have the argument, "Is it science or art?" It probably is a lot of both. In the case of deep learning and that deep learning black box, it's not magic, and it's not art, but it is still a black box.

To be fair, data scientists who are well versed in the exact workings of a particular deep learning model or a particular deep learning inference implementation certainly can do some tuning inside the inference engine and inside the model itself. But all in all, from the point of view of us as end users and systems integrators of deep learning software and deep learning tools, the process really is a black box. It's very difficult to see into the workings of the individual neural nets or perceptrons, and what they're doing, and why, is not exposed to us as integrators or end users.

That said, we certainly have some mechanisms that we use regularly to overcome that, and that is, of course, tuning the data: making sure the labeled data are all correct for our application, and fine-tuning over many, many iterations and maybe many, many thousands of images. That becomes the way we ultimately achieve a level of confidence in the reliability of the system.
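To make that data-centric loop concrete, here is a minimal Python sketch of one common mechanism: auditing the labeled data by flagging images where the current model and the human label disagree, so those images can be re-checked before the next training iteration. The CSV manifest layout, the class names, and the predict() stub are all hypothetical placeholders, not features of any particular deep learning product.

```python
# A minimal sketch of the data-centric loop: rather than tuning the model's
# internals, audit the labeled data itself. Layout and names are assumptions.
import csv

VALID_LABELS = {"good", "bad"}  # hypothetical class set for a part inspection

def predict(image_path: str) -> str:
    """Stand-in for the trained model's inference call (assumption)."""
    raise NotImplementedError("wire this to your deep learning tool's API")

def audit_labels(manifest_csv: str) -> list[tuple[str, str, str]]:
    """Return (image, label, prediction) rows where label and model disagree,
    so a human can re-check them before the next training iteration."""
    disagreements = []
    with open(manifest_csv, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: image,label
            label = row["label"].strip().lower()
            if label not in VALID_LABELS:
                disagreements.append((row["image"], label, "<invalid label>"))
                continue
            pred = predict(row["image"])
            if pred != label:
                disagreements.append((row["image"], label, pred))
    return disagreements
```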

I think from the customer point of view, this becomes somewhat of a challenge to buy into and to understand, in that even though you can do some early proof very easily, it is still, in a deep learning environment with a deep learning tool set, just plain not feasible to know in advance what the final outcome of the deep learning system will be for a particular application. And so we take a phased implementation approach to make sure we ultimately know what we can finally achieve.

And just a quick word again; I'm going to go back to that lighting comment. One of the things that's been a huge challenge for deep learning as a tool for machine vision (others may call it computer vision, but it's the same thing once we think about it in an industrial setting) is that the typical computer vision implementation in the consumer environment, let's say autonomous vehicles or autonomous robots, doesn't really worry about the illumination, about the lighting you mentioned. It doesn't worry about the imaging methodologies, and it doesn't necessarily worry about the one-to-one relationship between part and image that we find in a heavy industrial environment and in most industrial inspection situations. The importance of imaging just plain doesn't go away when we move into deep learning. I think the marketplace is starting to learn that, and the vendors of deep learning software are starting to understand that we need competent, reliable, and flexible imaging in order to run a typical online process in industrial automation.

Hardin: David, from the system integrator's perspective, are there certain tools or functionality missing from the existing AI and deep learning solution sets that, if we had them, would make it easier for a system integrator to feel confident adopting the technology or scaling it out to more and more customers?

Dechow: Well, I think there's been a lot of effort in the industry, and by industry I mean the marketplace of deep learning and those providing that solution set, to make the interaction between the end user or the integrator and the collection of data 1) easier, 2) a more reliable and robust process, and 3) supported by software, by an HMI or graphical user interface, that makes the initial categorization or labeling of the images for the data set much, much easier. So I think that's one of those things that isn't necessarily missing, but it has been slowly developing and moving forward in this industry.

The other thing I'd say, just plain out: in a deep learning environment, I like the phrase tool set. You mentioned tools, and you'll see that I use that word quite frequently. Deep learning is not a replacement for machine vision, and I think most experts in the field are recognizing that and saying it's not a standalone replacement for machine vision. I think the strongest tool set in a deep learning environment is one that contains a usable, useful collection of the discrete analysis tools that have been used in machine vision for five decades, and combines those in a hybrid environment, a hybrid solution, with the deep learning tools to gain a much better and much more robust result in the end. Some scenarios would have the discrete analysis tools do some pre-processing and even segmentation on the image to zero in on the items that need to be worked with, and then use deep learning for the things it's really good at: classification, segmentation, and, in some cases, location.
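As an illustration of that hybrid pattern, here is a minimal Python sketch under stated assumptions: real OpenCV calls handle the discrete pre-processing and segmentation, while a hypothetical classify_crop() stands in for whatever deep learning inference engine the solution actually uses.

```python
# A minimal sketch of the hybrid approach: classical (discrete) tools find and
# isolate candidate regions, and a deep learning classifier judges only those
# crops. The OpenCV calls are real; classify_crop() is a hypothetical stand-in.
import cv2
import numpy as np

def classify_crop(crop: np.ndarray) -> str:
    """Stand-in for the deep learning classifier (assumption)."""
    raise NotImplementedError("call your DL tool's inference API here")

def inspect(image_path: str, min_area: int = 200) -> list[str]:
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Classical pre-processing: smooth, then threshold to segment candidates.
    blur = cv2.GaussianBlur(img, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    results = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue  # discrete tools filter out noise before DL ever runs
        x, y, w, h = cv2.boundingRect(c)
        results.append(classify_crop(img[y:y + h, x:x + w]))
    return results
```

The design point is that the discrete tools discard noise and hand the network only the regions worth judging.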

Hardin: If I could follow up on that: we know that there are a few packages out there that incorporate both the traditional IDE, with its 2D, 3D, and 2.5D vision capabilities, and deep learning, but the majority are still pure AI and deep learning packages, surely judging by the number of offerings in the marketplace right now. Then we've got the separate traditional IDE. So do you feel that today we have the tools in place to make it easy for these two software environments to speak to one another, or does the market need to do more development?

Dechow: Yeah, I think you do find that in some products. I'm not going to pinpoint any of them, but you can go into the marketplace and, in a tight selection of individual products, just as you suggested, you do see that capability. As an integrator, we and other integrators that I know and talk to do this on a regular basis in the solutions we provide, as more of a custom solution that might contain deep learning tools along with other tools. Will this propagate? I'm not sure I can say that confidently. I think it's something that needs to propagate in the environment so that deep learning gets its best and most successful use in specific applications.

There's unfortunately a market stigma, not necessarily about deep learning right now, given all the hype and buzz around it, but about machine vision. I read a commercial article just the other day that mentioned cheerfully that machine vision had been around for a few decades, actually five decades, but that most of the work in machine vision was just primitive analysis with very primitive software. Of course, that's not the case.

Machine vision has tremendously advanced software and advanced tool sets, and I think the vendors in the AI/DL environment will start to embrace that as they see the real needs of the industrial environment and what customers really have to execute, and execute reliably, on their plant floors.

Carroll: David, on the thread of deep learning finding its best use in specific applications, in manufacturing for example, what are the biggest roadblocks you've seen when it comes to successful deep learning or AI adoption in manufacturing?

Dechow: I'm probably not telling you anything new, but it all starts with the correct specification of the tool for the application. As we suggested earlier, there is a fairly finite set of applications where deep learning is very, very well indicated: applications where a subjective analysis of the defect, the part, or the assembly has to be done in order to determine whether it's a good part or a bad part, a good assembly or a bad assembly. And when that subjective judgment has to be made amid a confusing background or a scene cluttered with other parts and other surface features, depending on the application, deep learning is just so well indicated.

I've actually done some experiments here in my lab where I've attempted to parallel some of the industrial deep learning test data sets with pure discrete tools, and my results are what you'd expect. I can sometimes create a discrete tool or a discrete algorithm in code that does nearly or just as well as deep learning, but in many, many cases, on those data sets with very confusing, very subjective scenes, deep learning is definitely the targeted technology. So it all starts with the correct specification, and with that I think we can be confident that machine learning, deep learning, is going to be successful. The fact is, and I'm not trying to compare one against the other, deep learning is not going to be right for every application. It's not going to solve every application reliably. It's just like certain tools I use every day in regular machine vision. Let's pick a geometric search tool: that tool is right for maybe 15 or 20 percent of the applications I might work with. I will find that in deep learning as well. There will be a subset of applications in industrial machine vision where deep learning is the absolute correct tool, and it will be a percentage subset rather than a full-blown replacement for the discrete analysis tools.
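In the spirit of those lab experiments, here is a small Python harness that scores a discrete, rule-based classifier and a deep learning classifier against the same labeled test set. The classifier callables and the manifest loader are assumptions to be supplied by the reader; nothing here reproduces the actual experiments described.

```python
# A small harness for comparing classifiers over one labeled test set.
# Both classifier callables are user-supplied assumptions.
from typing import Callable, Iterable, Tuple

def accuracy(classify: Callable[[str], str],
             samples: Iterable[Tuple[str, str]]) -> float:
    """samples yields (image_path, ground_truth_label) pairs."""
    total = correct = 0
    for path, truth in samples:
        total += 1
        correct += (classify(path) == truth)
    return correct / total if total else 0.0

# Usage sketch (all names hypothetical):
# samples = list(load_manifest("test_set.csv"))
# print("discrete:", accuracy(discrete_classifier, samples))
# print("deep learning:", accuracy(dl_classifier, samples))
```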

Hardin: So we've identified that data-centric approaches are very important to being successful with deep learning. It's very difficult to go in and actually tweak the model; it's usually a better approach to look at the data labeling and the data sets coming in. That is directly related to the quality of the specification and the availability of a training set for determining what's doable with deep learning versus traditional tools. And we talked about how deep learning, as a newer technology, is going to need to climb up the back, so to speak, of traditional machine vision. For it to reach its potential, it needs to stand on the shoulders of traditional IDEs and all the work that's been done over the 50 years prior.

Dechow: Very well put, yes.

Hardin: Thank you. The last thing I wanted to ask you about is something that doesn't get talked about a lot in public, and that's edge deployment versus cloud computing, whether for training or for deployment; we get a lot of talk about 5G. And I think something related to this is the question of security, because a lot of manufacturers have concerns about uploading their data to third-party compute systems. What do you think about that?

Dechow: I'll start with the last part first. It absolutely is an ongoing challenge, and probably has been ever since local servers became popular in industrial environments, to convince end users that using a cloud facility, and having access to that cloud facility, is really, really safe for their application and for their entire factory. I can't really give you a number, but just anecdotally I'd say a very good percentage, more than 50 percent, of the companies I talk to are not completely on board with uploading all of their images and all of their data for a particular application, like a deep learning inspection application, to a cloud environment. Now, it does happen, and one might say I'm a fan of it. At some point, perhaps, we'll be uploading the images in a deep learning application where the cloud serves not only as the repository but also handles the retraining, maybe even automated retraining, of the models, thereby creating a self-tuning system. That's a great goal to try to achieve. I think further education and further acceptance by the end user community in the industrial environment are going to be necessary to make that happen.
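For illustration, here is a minimal Python sketch of the self-tuning loop described above, where the edge uploads new images and pulls back a retrained model. Every endpoint and payload is a hypothetical placeholder, and the security concerns raised in this answer apply directly to a loop like this.

```python
# A minimal sketch of the edge-to-cloud retraining loop: the edge system
# uploads images to a cloud repository, the cloud retrains, and the edge
# pulls the refreshed model. All URLs and payloads are hypothetical.
import requests

CLOUD = "https://example.invalid/vision"  # placeholder endpoint (assumption)

def upload_image(path: str, label: str) -> None:
    with open(path, "rb") as f:
        requests.post(f"{CLOUD}/images", files={"image": f},
                      data={"label": label}, timeout=30).raise_for_status()

def fetch_latest_model(dest: str) -> None:
    resp = requests.get(f"{CLOUD}/models/latest", timeout=60)
    resp.raise_for_status()
    with open(dest, "wb") as f:
        f.write(resp.content)  # edge swaps in the retrained model
```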

We talk about the edge versus the cloud a lot, and really, when you get right down to it, machine vision, robotics, and controls are edge technologies. They simply are edge technologies, and any attempt to extract them from the edge is going to meet with some objection, not only from the integrators but from the end users, and simply in terms of whether it's going to be successful in an application. These processes really need to reside at the edge. There are certainly some dramatic examples of using cloud-based processing for other types of learning applications, predictive maintenance, safety, and so on, but for the inspection type of application, I think a lot of people would agree with me that it has to reside at the edge.

Now, just to comment on that, though: we see the emergence of components targeted for deep learning that in and of themselves are edge-based components. And it's not too surprising that we would see inference components, let's say a small smart camera, able to do their own inference right at the edge, without a PC and without a heavy-duty processor. I think this is something that's emerging, I think there's been acceptance of it, and we'll see whether that kind of technology thrives for deep learning in the industrial environment. I will say I've even seen a new component very, very recently that purports to provide not only inference but also the entire training and classification of a model right at the edge. In other words, a smart camera that can collect all the images, have them labeled in some way, shape, or form, do the training right on the camera while it's processing other images, and then go on and continue with inference using that model. I think these are very brilliant implementations and directions to go in. I would only say that I'm not quite sure yet how that will pan out in the marketplace, and we'll have to see. But definitely, edge is edge, and we in machine vision are an edge-based entity, and probably will be for the foreseeable future.
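As a rough illustration of inference without a PC, here is a minimal Python sketch assuming a TensorFlow Lite model running on an embedded device via tflite_runtime. The smart cameras described here ship with their own proprietary SDKs, so this is only an analogy for the on-device inference step, and the model file name is hypothetical.

```python
# A minimal sketch of on-device inference, assuming a TensorFlow Lite model
# on an embedded board. The model file name is a hypothetical placeholder.
import numpy as np
from tflite_runtime.interpreter import Interpreter  # lightweight, no full TF

interpreter = Interpreter(model_path="inspect.tflite")  # assumption
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def infer(frame: np.ndarray) -> np.ndarray:
    """Run one frame through the on-device model; frame must match inp's shape."""
    interpreter.set_tensor(inp["index"], frame.astype(np.float32)[np.newaxis, ...])
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])[0]  # e.g., class scores
```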

Hardin: We can't wish away latency, especially in safety-critical applications.

Dechow: Exactly right.

Hardin: David, thank you so much for making your time available to us today, sharing your expertise with the community, as well as with our team. We really, really appreciate it.

So until next time and our next episode, thanks for joining us today and we'll see you at the next TECH B2B Manufacturing Matters.