AI in physical security - Learning, retraining, automating
August 27, 2021
Guests
Joseph B. Fuller, Professor of Management Practice at Harvard Business School
Florian Matusek, Product Group Director and Managing Director at Genetec Inc.
Description
Can intelligent machines replace physical security operators? How far along is machine learning technology? In Engage Episode 9, two thought leaders join our hosts to discuss these questions. Joseph B. Fuller, Professor of Management Practice at Harvard Business School, and Florian Matusek, Product Group Director and Managing Director at Genetec, share their thoughts. Tune in as we discuss the effects of automation in physical security and who’s responsible for ensuring its accurate and ethical use.
Transcript
David Chauvin: Welcome to Engage: A Genetec podcast.
"The primitive forms of artificial intelligence, have already proven very useful, but I think the development of artificial intelligence could spell the end of the human race is limited by slow biological evolution, couldn't compete, and would be superseded." - Professor Steven Hawking.
David Chauvin: When talking about artificial intelligence or machine learning, the concerns of many are summed up in that famous quote from Professor Stephen Hawking's BBC interview from 2014. Professor Hawking and many other leading technologists, scientists, and economists have questioned the far-reaching consequences of A.I.
Kelly Lawetz: Thankfully, Professor Hawking's prognostications about A.I. overthrowing the human race have not come to pass. But there's no doubt that the adoption of A.I. in our everyday lives is transforming our world. And that's what we're going to get into today.
Kelly Lawetz: I'm Kelly Lawetz,
David Chauvin: And I'm David Chauvin.
David Chauvin: The most tangible application for A.I. and automation exists in the workforce, where advancements leave many worried for their jobs.
Kelly Lawetz: But just how disruptive will the move to A.I. and automated work be for physical security? Could it help operators become more efficient when dealing with the deluge of data? We speak with two leaders in intelligent automation to understand what's required, what's real, and what's at stake when deploying this technology.
"I run a project called The Future Work, where we look at various elements of developments in the workforce and the implications for policymakers and executives." - Professor Fuller.
David Chauvin: That's my guest, Joseph Fuller, professor of management practice at Harvard Business School. His decades as a consultant for Monitor Deloitte have brought him to the forefront of developing strategies that apply technology and data to recruitment and workforce optimization. I speak with Professor Fuller in the first half of our show.
Kelly Lawetz: Followed by my interview with Florian Matusek, product group director of video analytics at Genetec Inc. A leading technologist in intelligent automation, Florian helps us understand what's required, what's real, and what's at stake when it comes to deploying this technology.
David Chauvin: But first, I talked to Professor Fuller, whose work as the co-leader of Harvard's Managing the Future of Work initiative speaks to the realities of A.I. and automation development and their impact on the workforce.
Interview with Professor Fuller
David Chauvin: I started our conversation by asking him where he thinks automation and A.I. will have the most influence in the years to come.
David Chauvin: So for you, what are some of these new categories or new industries that maybe aren't in the mainstream conversation yet, but where you see automation becoming bigger, industries we thought might have been immune just a few years ago?
Professor Fuller: If you think of automation very broadly, encompassing things like the development of artificial intelligence as well as physical automation, like sensor networks, robotization, and the Internet of Things, I think we're going to see the displacement of a type of talent that hasn't generally been affected by automation before, specifically workers that do non-routine cognitive work. What I mean by that: cognitive is pretty straightforward. It relies more on a strong mind, sound logic, good numeracy, and communication skills than a strong back. Non-routine means work in which the variability around what we'll call the main sequence of tasks is pretty significant. There are eight, ten, twelve types of skills and tasks your boss might call upon you to do in a somewhat unpredictable fashion at any time during the workday. The vast majority of work historically has been routine, work where that variability is relatively narrow, and it has therefore been a prime target for automation. If you go into an auto assembly plant today, it's much less manpower-intensive than it was historically. And the dirty, dark, dangerous jobs in a plant, like spray-painting cars, are things that are already subject to being fully automated with today's technology. So that's not where the significant new risk lies. What A.I. starts to do is cut into big chunks of what those non-routine white-collar cognitive workers have done. So whether it's a job like an actuary or attorney or supply chain manager, while there's a veneer of uniqueness about each transaction that they do, a tremendous amount of what they do can be isolated by A.I. and done by A.I. with a lower error rate than humans make.
David Chauvin: So the core is repetitive enough that employers can automate it?
Professor Fuller: You're getting at the next generation of what's often called in the literature cognitive A.I., where it doesn't rely on structured learning to understand its task. So we're getting more sophisticated about how A.I. can learn, either in an unstructured fashion or through interfaces with that white-collar worker, where the A.I. generates recommendations and the worker decides. And when you get to a few hundred of those types of transactions, the A.I. has learned the parameters that govern the heuristics the human worker is applying to those decisions. And yes, it will still make the occasional error, which points out, I think, a pretty important thing, David, relative to the history of technology and where we will go. The deployment of technology has invariably displaced jobs while creating new jobs to support the technology. So the net attrition of work has not been 100 percent, not nearly. It's often been 50 percent, 60 percent. Now, the skills required to get the most out of that technology are often different from the incumbent worker's skills, which brings us to all sorts of interesting questions about things like reskilling.
David Chauvin: Short-term value creation is the primary driving factor for most decisions, especially in publicly traded companies, and that means maximizing profit over employment. Suppose that's what drives the decisions of executives, owners, and investors. How can we expect them to make decisions that benefit society long-term, rather than using technology to extract more short-term value for their own benefit?
Professor Fuller: I think, if I may, not to get too philosophical with you, but I think an examination of cognitive psychology will show that human beings, businesspeople included, make short-term optimization decisions. I'm also going to push back a little bit on the concept that the adoption of technology is going to be disproportionately negative for workers. Technology has displaced work to a great degree in sectors like mining and certain types of manufacturing; dirty, dark, dangerous jobs get displaced because of liability, general human decency, and trying to prevent people from being in dangerous situations. The biggest category of work that employers will automate away is a fourth D, and that D is dull. If the part of your job that is deadly dull right now, maybe the way you pay your bills, gets automated, your job becomes much more enjoyable because you're able to spend those hours on the more valuable part of it. Or maybe you can go on and do something else.
David Chauvin: That was a fantastic conversation. Thank you very much, Professor Fuller. I appreciate it.
Professor Fuller: A pleasure to be with you. Thanks for inviting me.
Interview with Florian Matusek
"Connected to privacy, we also have this topic about artificial intelligence, or we call intelligence automation. I'm also thinking about the Artificial Intelligence Act, by the European Commission, which was just released in April where they are preparing rules about how I'm supposed to be used, the way it's supposed to be used. This is also being tackled more and more on the regulatory side." - Florian Matusek.
Kelly Lawetz: And that's my guest, Florian Matusek, product group director of video analytics at Genetec. He received his Ph.D. in information processing science and is the co-founder of Kiwi Security, a leading developer of video analytics solutions, and an intelligent automation expert. Florian's KiwiVision Privacy Protector video analytics technology was awarded the European Privacy Seal. In the second half of the show, I talked to Florian about the real-world applications of intelligent automation; we dove deep into what current advancements mean for workers in endangered sectors and the impact of bias in data collection.
Kelly Lawetz: We kick things off by talking about the positives of automation and the evolution of the workforce in the wake of all these technological developments. There's a lot of hype and fear about A.I. and how it will replace jobs historically done by humans. What I want to know are the real-life applications of this technology. What does it look like in the workplace?
Florian Matusek: So A.I. is great at automating routine tasks; things that repeat all the time are what you can really teach a machine. So everything that is routine, for example an operator watching video screens, is where we can help them and filter out the interesting information for the operator. So the operator can make actual decisions based on the data, and we take away the task of watching the video. This is a perfect example where we can help operators with routine tasks and at the same time also produce new kinds of jobs. For example, in Montreal, we have a dedicated team of labelers. Those people are still watching the videos and creating data sets for our license plate recognition. That is an entirely new type of job that, in fact, already makes up a considerable part of the economy in Africa: just labeling data for Google, for Microsoft, for Amazon. And this wasn't there before. And I find it interesting because we always talk about new types of jobs in terms of highly skilled jobs, which are produced as well. Still, because of A.I., we also create a lot of lower-qualified employment for people who might otherwise not get a job.
Kelly Lawetz: Is it everything we predicted, and should we all be scared for our jobs?
Florian Matusek: So one of the things I think we're seeing is something that always happens with hype. In the beginning, everybody's super excited and expects the trend to continue the same way it started. And we see now with A.I. that this is not the case, as always. We're reaching a certain plateau where things are not getting more and more accurate. But we find ways to leverage this technology in real life beyond the hype. I believe there are many applications where A.I. can make the lives of operators in the security industry easier and their work more efficient through automation. But we have to be realistic about what works and what doesn't work. And I think this is a crucial point for all end users, all our partners: to understand the technology and see where it can help and where it cannot help. Right. We saw last year with the pandemic that there were a bunch of applications based on A.I. that can help in our day-to-day operations. And I divide them into three different levels. There are the ones that genuinely help automate things for our operators, the ones that somewhat help but still need some user engagement, and a third category of ones that were just overhyped and, in the end, didn't turn out to have practical applications. In the first category, I would count things that help you manage occupancy rates in spaces. In the pandemic, it was crucial to know how many people are in a particular area and how many people go into a shop. So you could, for example, turn on a red light if there are too many people in an area. Using video analytics and people counting for this makes sense. In the second category, I would see things like social distancing, where A.I. was being used to detect how close people are walking to each other. This is useful, but if you go and talk to operators, it's not so valuable to do it entirely automatically. In the end, the A.I. doesn't know who belongs together and who is allowed to walk together. I had an interesting conversation with a professor from Amsterdam, a criminologist who works with Amsterdam's police to analyze the output of A.I. and figure out who belongs together and who doesn't. It's an excellent example where you combine A.I. with humans to drive value, for businesses and for the police in this case. And then there's a third category of applications that boiled up during the pandemic that didn't make any sense. There I would count all these thermal cameras to do temperature checking and so on. This technology turned out to be a big hype that was more dangerous than helpful.
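To make the occupancy example concrete, here is a minimal sketch of the threshold logic described above: a people count from a video analytics feed is compared against a configured limit, and a warning light is switched when the limit is exceeded. The `get_people_count` and `set_light` functions are hypothetical placeholders, not part of any Genetec product or API.

```python
import random
import time

MAX_OCCUPANCY = 50  # configured limit for the monitored area


def get_people_count() -> int:
    # Placeholder: in a real deployment this would query the
    # people-counting analytics; here we simulate a reading.
    return random.randint(0, 80)


def set_light(color: str) -> None:
    # Placeholder: in a real deployment this would drive a signal light.
    print(f"Light set to {color}")


def check_occupancy() -> None:
    count = get_people_count()
    set_light("red" if count >= MAX_OCCUPANCY else "green")


if __name__ == "__main__":
    for _ in range(5):  # poll a few readings for demonstration
        check_occupancy()
        time.sleep(2)
```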
Kelly Lawetz: And what role do you see A.I. playing in alleviating burnout?
Florian Matusek: So burnout among operators is a big problem. A recent study showed that 70 percent of operators suffer from burnout, and it's no wonder. We create and install new sensors, new information systems, and video analytics that analyze data. But the result is that there is so much data coming towards the operator all the time. You get flashing lights and alarms all the time, which creates anxiety that is just not healthy in the long term and creates burnout. So when we introduce new systems, new A.I. systems, we shouldn't only think about what they can detect on a technical level. We also need to think about how this information can be consumed. Do we have systems in place that manage the data, prepare it for the operator, filter it, and only present the relevant information? Do we have dashboards? We need to think about both things, creating the data and managing the data, so we don't overload our operators.
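A minimal sketch of the kind of filtering described above, assuming each incoming event carries a simple priority field; the event structure and threshold are hypothetical and stand in for whatever rules a real monitoring system would apply.

```python
from dataclasses import dataclass


@dataclass
class Event:
    source: str    # e.g. camera or sensor identifier
    kind: str      # e.g. "motion", "door_forced", "loitering"
    priority: int  # higher means more urgent


# Hypothetical rule: only surface events at or above this priority,
# so the operator is not flooded with routine notifications.
ALERT_THRESHOLD = 7


def events_for_operator(events):
    """Keep only high-priority events, most urgent first."""
    urgent = [e for e in events if e.priority >= ALERT_THRESHOLD]
    return sorted(urgent, key=lambda e: e.priority, reverse=True)


if __name__ == "__main__":
    incoming = [
        Event("cam-01", "motion", 2),
        Event("door-12", "door_forced", 9),
        Event("cam-07", "loitering", 7),
    ]
    for event in events_for_operator(incoming):
        print(event)
```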
Kelly Lawetz: Over the last couple of years, especially during the pandemic, there's been a lot more awareness and discussion around systemic bias and racism in our culture, and how technology created, statistically speaking, mostly by white men can reflect that bias. What is the responsibility of businesses and end users to verify the suitability of the data used in their applications?
Florian Matusek: The first thing is that there can be, and there likely always will be, some bias. This is because machine learning is based on data and data sets. These data sets are based on actual decisions of humans, and all humans are biased in some way. So I think there is no way to avoid bias altogether. But what's important is to realize that there is bias, because once you realize that there is bias, there are ways to counteract it and do something against it. I believe that end users, and users of machine learning systems in general, have a particular responsibility to ask what the biases are in the system and what data sets were used to train them. And this is not being done today for several reasons. One of them is that there is not enough awareness of this bias problem. Today we're all about making the next thing work. It's all about features and cool stuff. But we have to come down a little bit, think about these things in the background, and build this awareness around bias. So I think that's one problem. The other is that there is no established way today for manufacturers to provide this information about what kinds of data sets are in there. I've been talking personally to many manufacturers. One of my standard questions to any video analytics company that I speak to is about their data sets: where do you get your data sets? Do you produce them yourself? Do you buy them? Are they public ones? Because there are also licensing issues, and this is very confidential information for many manufacturers. So I think this is also one reason why this is not being discussed today, but it's something that needs to change in the future. Manufacturers need to be much more open about their data sets. They need to be more open about what they do to balance their data sets and avoid or fight against certain biases. It's also on the end-user side to demand it, because if there's no demand, manufacturers will see no reason why they should do it. So the responsibility is definitely on both sides.
Kelly Lawetz: But how do we push that forward? Right. Privacy only really started to be implemented when we had the force of law and regulations behind confidentiality, when there were consequences. So how do we do the same around transparency and accountability for the data sets we're using?
Florian Matusek: So with privacy, it started a few years ago with the GDPR in Europe. That was essentially the blueprint for many other governments, the CCPA in California, Brazil, Singapore, etc. Still, it was an initiative by the European Union that spearheaded this. And similarly, they're doing something right now with A.I. as well. In April, the European Union proposed a new regulation for artificial intelligence that targets how artificial intelligence is supposed to be used. So this is targeting the application side, and at the same time there is a separate track dealing with the concept of so-called trustworthy A.I. This is a concept that's relatively new, and not many manufacturers have embraced it yet, but they will. Essentially this means that to have any idea how trustworthy your A.I. application is, it has to fulfill certain principles, one of them being avoiding biases. Another one is always keeping the human in the loop, not making automatic decisions, and providing transparency. All these different kinds of concepts are out there. Groups are working on this in the European Union. The Mozilla Foundation is an excellent example; of course, it is embracing this with everything they do with their browser, and other companies are jumping on this. So trustworthy A.I. is a topic of the future that's just starting right now. But it will be at the core of any A.I. application that we build in the future, I'm sure of it, and there will be regulation around it. Maybe you have heard of the article in The Guardian with Kate Crawford from Microsoft. She was saying that A.I. is neither artificial nor intelligent, and so forth. A very nice line. Anyway, her point is that A.I. is everywhere, but it involves so much more than the algorithms themselves. It's also all the processes around it: when you order toilet paper from Amazon, it's not only recognizing your voice but triggering so many things around it. So A.I. is something different today than it was five years ago.
Kelly Lawetz: And so what's the talk at Genetec?
Florian Matusek: So we have embraced it without calling it that before. For example, in my own team, we develop video analytics applications. We have one person who is dedicated to working with the data sets and balancing them. What he's doing is looking at our data sets and seeing how many different genders, ethnicities, and so on we have, to make sure they're as balanced as possible. Of course, this is to avoid biases, but also to make a good product. Right. If we sell our product in Brazil, we want it to work the same way as in Norway. So it is really in our interest to make it as balanced as possible, to make it as general as possible.
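A minimal sketch of the kind of data set balance check described above, assuming each training sample carries annotations from the labeling pipeline. The attributes and records below are hypothetical, using neutral attributes like capture region and lighting rather than the demographic annotations mentioned.

```python
from collections import Counter

# Hypothetical annotations attached to training samples; in practice these
# would come from the labeling pipeline rather than being hard-coded.
samples = [
    {"id": 1, "region": "Brazil", "lighting": "day"},
    {"id": 2, "region": "Norway", "lighting": "night"},
    {"id": 3, "region": "Brazil", "lighting": "night"},
    {"id": 4, "region": "Brazil", "lighting": "day"},
]


def attribute_distribution(samples, attribute):
    """Return the share of samples for each value of one annotation."""
    counts = Counter(s[attribute] for s in samples)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}


# A heavily skewed distribution flags where more data is needed before the
# model can be expected to behave consistently across groups or conditions.
for attribute in ("region", "lighting"):
    print(attribute, attribute_distribution(samples, attribute))
```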
Kelly Lawetz: When we look at these data sets, there are two different kinds we see on the market: the academic, theoretical applications in universities, and the real-world, practical applications in business. Is there a way to foster a better relationship between these two to produce better, more accurate data sets?
Florian Matusek: Yes, I agree. If we look at data sets, there are two big problems. On the academic side, you have the big problem of the lack of real-world data. So you see a lot of simulated data. You see academics running around in the courtyard trying to create data sets. I know from my own experience: we made a data set for fight detection, and we simulated fights. But it turns out a simulated fight is not the same thing as the real deal. So there is a lack of proper data sets on the academic side. On the company side, there is a lack of data sets that are allowed to be used commercially. There's a lot of data out there on the academic side, but from a licensing perspective, companies are not allowed to use it. Some do it anyway, which is a legal risk for users of these systems because of licensing issues. But the companies that do not use these data sets are very limited and have to build up data sets themselves. And in the end, it's the big companies with the resources to create big data sets that can build proper applications. All the smaller ones are either left with bad data sets or forced to use data sets that they are not allowed to use. So what would help is a collaboration between academia and companies; I think we can solve both problems that way. Companies can provide data sets to academia that are from the real world. And at the same time, in this process, we could find a way to make these data sets publicly available, under a licensing scheme that helps small companies create proper applications with real-world data without relying on unlicensed data.
Kelly Lawetz: And working with those universities, could we also develop standards for testing and certifying the quality and validity of those data sets?
Florian Matusek: Yes, I think this is a topic where we need academia, because there are many exciting approaches. On the other hand, we also have to realize that it depends significantly on the kind of data set you're using. Biases vary depending on what you do with the data set and its contents. For example, videos are entirely different from transactional data. In a transactional system, you might have names tied to the data, but in a video, you don't have names; you see images of people. So this is very domain-specific. But I do believe there need to be rules to verify this. And actually, there is a framework to do this automatically or semi-automatically, also coming from Europe. It's called ALTAI, and it implements these trustworthy A.I. principles as a checklist. You can run your application through the checklist and get an assessment of how trustworthy your A.I. is.
Kelly Lawetz: Thank you so much for joining us today. We appreciate you taking the time.
Florian Matusek: Take care.
Kelly Lawetz: That was Florian Matusek, product group director of video analytics at Genetec. We hope you enjoyed this episode of Engage on machine learning and A.I. in the job market. I'm Kelly Lawetz. We'll see you next time.
Engage, a Genetec podcast, is a production of Genetec, Inc. The views expressed by the guests are not necessarily those of Genetec, its partners, or customers. For more episodes, visit our website at genetec.com, listen on your favorite podcasting app, or ask your smart speaker to play Engage, a Genetec podcast.