As technology becomes more advanced each year, organizations need to weigh the risks that disruptive innovations can introduce against the rewards. Are improvements in organizational efficiency, and even increased revenue, worth the risk of a data breach or cyberattack?
We invited a panel of experts to our head office in Toronto to debate this topic in front of an audience of information security, cybersecurity, and risk professionals at our #EmergingRisksTO event. At the end of the night, the audience determined the winning team. Read a summary of the debate below, or jump to the full transcript at the end of this article, and ask yourself: Are you Team Risk or Team Reward?
Topic #1: Cybersecurity concerns should not outweigh the benefits of adopting IoT and third-party vendors
Team Risk
There will always be an inherent vulnerability with IoT; it’s difficult to close this gap 100%. However, it is still the responsibility of those who develop these types of products to protect the infrastructure and find ways to gain visibility into these gaps to prevent attacks from happening.
Companies like Google are, by default, trying to predict behaviour and sell those behavioural outcomes to advertisers, and sometimes, as we have seen in recent years, to aid experiments that can influence elections. There isn't enough government oversight mandating security standards for operational technologies.
Team Reward
IoT is about convenience. One way to look at this is to consider that the population is aging. Take, for example, applications like fall detection and health monitoring. There are cases where those devices have literally saved lives, including the Apple Watch, which has detected cardiac issues.
Companies are investing in technologies that make our lives simpler and easier and that can enable people with limited mobility and disabilities to continue living with a level of dignity, without requiring a care worker.
The enabling effects of IoT, when also combined with technologies like AI, start to bring in things like self-driving cars and smart environments, which can be hugely enabling as a matter of human rights for people who don’t otherwise have solutions.
Topic #2: Individuals are ultimately responsible for the personal data that they upload online, as opposed to organizations, governments, and third-party bodies
Team Risk
You can make sure you don't overshare online, keep private information where it belongs, and protect what matters. But at the end of the day, if the devices you use are not secure and not developed with security in mind, they can expose all of your most sensitive secrets to the world whether you like it or not.
Sometimes criminals aren’t necessarily targeting you, they’re targeting anybody because everybody has credit information that they can steal. Everybody has identity information that they can make use of and create secondary fake IDs on the basis of that. How else do hackers find out what your mother’s maiden name is? They surveil you and one great way to do that is remotely. Think about that. We could do all the right things, but if the devices that we use are not adequately secure, you’re still at risk.
Team Reward
There’s certainly something to be said about personal responsibility and how we share data. There are examples where people do overshare online and that’s one of the bigger concerns in terms of the data that is available. If we have a cautious approach and share the data that is necessary to gain the benefits of whatever system we’re participating in or introducing that data into, the value is there.
There’s a little bit of hypocrisy that we should probably acknowledge around what society has encouraged us to do and the way that we share information online across all forms of social media and elsewhere. In other areas, we generally make people somewhat personally responsible for themselves whether in finances or in how they deal with their motor vehicles or anything else.
People are not privacy and security experts, so common sense says they should follow best practices, and corporations can help with that. Beyond that, there is fault that shouldn't be attributed just to technology companies, but to the legal system as well. Technology has evolved faster than we as humans can keep up with it. We're just catching up to understanding the value of our privacy and of what we put out there.
Topic #3: Why is the adoption of machine learning and AI rapid among tech giants, but lagging in the broader commercial world?
Team Risk
When you're looking to train a neural network, you have to have a really good idea of what you want to train it to do and to find, and then find all sorts of permutations and combinations of that. The training process can take one, two, or three months per use case. This is one of the reasons for the lack of adoption of AI: it takes considerable effort to train traditional approaches.
Most of the time, simple orchestration and automation is all that's required, not an AI-driven automation process. AI does have its place in identifying patterns within an unstructured data set, but a human is still needed to label the results afterwards and to find all the other permutations and combinations.
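As a rough illustration of the workflow described here, unsupervised pattern-finding followed by human labeling, consider this minimal sketch using scikit-learn; the data, cluster count, and label names are all hypothetical:

```python
# A minimal sketch of unsupervised pattern-finding followed by human
# labeling. The data, cluster count, and label names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for an unstructured, unlabeled data set (e.g., event features).
events = np.vstack([
    rng.normal(0.0, 1.0, size=(500, 4)),  # the bulk of ordinary activity
    rng.normal(5.0, 1.0, size=(20, 4)),   # a small, unusual cluster
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(events)
sizes = np.bincount(model.labels_)
rare = int(np.argmin(sizes))  # here, the smaller cluster is the odd one out

# The model only surfaces the pattern; a human still decides what it means.
for cluster_id, size in enumerate(sizes):
    meaning = "needs analyst review" if cluster_id == rare else "baseline activity"
    print(f"cluster {cluster_id}: {size} events -> {meaning}")
```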
Computers don't accommodate the nuance of human living and experience. You can teach a system to recognize things, but we do not have AI that comes even remotely close to understanding anything of our world. Until we do, there will always be risks that we must manage.
Team Reward
AI is being used successfully by many companies worth billions of dollars, and many of them, in large part the tech giants such as Uber, Tesla, Google, and Amazon, would attribute those billions to their use of AI.
Beyond theories of whether or not it's effective, companies that are not tech start-ups, tech giants, or digital natives are frequently not data-first. Banks, for example, have not historically been data-first; they do not have the data or the ability to simply go and use AI. You need to be a first mover. Generally, it is the digital natives whose entire M.O., business strategy, and decisions about which products and services to sell revolve around data analysis. That is the main reason AI is used primarily at tech companies and tech giants.
Topic #4: What are the risks and rewards that organizations need to consider when it comes to enterprise blockchain technology?
Team Risk
Companies are integrating blockchain blindly, introducing errors into the consensus algorithm and problems into the way the network is designed so that, once again, consensus can be gamed. Not enough thought is being put into asking: why blockchain? Why can't we just use other simple, proven approaches? And when we implement blockchain because we think it will do a better job, are we really thinking about the risks? Have we done enough research into how the system can be gamed, and have we really mitigated those risks? Organizations need to ask themselves whether they can do what needs to be done with another proven, secure method.
People have been gaming everything in blockchain already, so we can foresee those risks. Exchanges have been hacked. Wallets have had their problems. There has been a total lack of governance in this space, and that has been exacerbated by the media.
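To make the concern about gamed consensus concrete, here is a toy simulation, not any real blockchain's algorithm: a bare majority vote in which compromised nodes always back a forged record. Once an attacker controls enough nodes, the network reliably "agrees" on the forgery. All numbers are illustrative.

```python
# Toy majority-vote consensus, purely illustrative. Real consensus
# algorithms (proof of work, BFT variants, etc.) are far more involved.
import random

def forged_record_wins(num_nodes: int, num_compromised: int,
                       honest_uptime: float = 0.9,
                       trials: int = 10_000) -> float:
    """Fraction of rounds in which a forged record gets the majority.

    Compromised nodes always vote for the forged record; honest nodes
    vote for the true record but are only online with probability
    `honest_uptime`. All figures are hypothetical.
    """
    wins = 0
    for _ in range(trials):
        honest_online = sum(random.random() < honest_uptime
                            for _ in range(num_nodes - num_compromised))
        if num_compromised > honest_online:
            wins += 1
    return wins / trials

for compromised in (30, 45, 51):
    rate = forged_record_wins(num_nodes=100, num_compromised=compromised)
    print(f"{compromised} compromised nodes of 100 -> forged record wins {rate:.0%}")
```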
Team Reward
Society has become a lot more open to blockchain technologies and some of their applications, with blockchain as the foundational technology underlying those applications.
Beyond risk-taking start-ups, we see very large companies and big enterprises employing and experimenting with blockchain these days, frankly in a lot of boring and probably effective ways, simply creating internal efficiencies. Blockchain is a decentralized, intermediary-eliminating, encrypted technology that banks are currently experimenting with for back-office transactions. But blockchain is a bad idea for companies that don't need blockchain; it should not be deployed in every single company.
Topic #5: Innovative technology and its benefits should trump the right to personal privacy
Team Risk
If people have such a cavalier attitude toward privacy, the next 10 years will be really interesting, when privacy becomes an archaic term from the distant past. When corporations win, and they can use your information and sell it fifty ways to Sunday, you might look back and realize, "Oh, that's why I should have been more concerned about personal privacy."
If you value your freedom, you should think about the implications for our democracy. If many of us in society can be convinced to give up our privacy, that will affect lawmakers: the way they develop policy, the bills brought before government, and the laws that are then passed. If the majority of us do not value privacy, that will eventually make its way into weaker laws.
Team Reward
If we value privacy, we value freedom. If you are willing to give up your privacy, then that’s for you to decide. It seems like we are willing to give up our privacy for the benefits that the technologies deliver to us. If people are willing to give up their privacy for the ability to connect with others, for the ability to reach more people than ever, for the ability to organize and assemble with like-minded people, that’s up to us to decide. You can’t say that you value freedom and then tell people what that means to them.
Allowing technology to innovate won't necessarily further a corporation's ability to sell your information. Enabling more technological innovation can also enable individuals to own their information and to sell it directly if they want to, because those technologies are already being created.
Thank you to all our attendees and speakers!
Want to attend the next debate? Click here to learn more about our events.
Interested in learning more about how Resolver can help your organization mitigate risks that arise from new technologies? Take a Guided Tour of our ERM Software now.
About our Speakers
John Daniele, Vice President, Consulting Services, Cybersecurity, CGI
John Daniele is a cybersecurity professional with over 20 years of consulting experience. John has supported clients in both the private and public sector including banking, resources, law enforcement and defense organizations. Currently serving as VP of Consulting Services, Cybersecurity, at CGI, John works with clients to augment and advance the maturity of their cybersecurity operations, to deliver more strategic insights and intelligence to senior business leaders on credible threats.
Simon Clift, Co-founder, Achray Capital Corporation
Simon Clift is a Co-Founder of Achray Capital Corporation, a trading advisor firm for the RJOASIS managed futures platform in Chicago. His 35 years of software engineering and implementation experience have been applied to algorithmic trading, risk management and derivative pricing, for buy and sell side institutions in Switzerland, France and Canada.
Fern Karsh, General Counsel & Director, Blockchain and Cryptoassets, Catalystic AI
Fern Karsh is a consultant and lawyer focused on blockchain, crypto assets and frontier technologies. She serves as General Counsel and blockchain technologies lead at Catalystic AI, an artificial intelligence and blockchain consultancy, incubator, angel investor, and AI academy that takes start-ups, scale-ups and large enterprises to the next frontier.
Artem Sherman, Systems Investigations Supervisor, TJX
With a background in surveillance, internal investigations and loss prevention, Artem currently works as the Systems Investigations Supervisor at TJX, where he is responsible for the support and enhancement of all systems related to loss prevention, as well as the development of new systems and solutions to support investigations, operations and corporate loss prevention.
Full Transcript of Event
Geoff: I want to welcome everyone to Resolver’s offices and to the event. Really appreciate everyone coming out while it’s this cold and with all the snow as well. I’m Geoff Broad, so it’s really great to meet everyone. I’m a sales manager here at Resolver. I’m just going to kick things off and then hand it off to Peter Nguyen.
For those of you who are less familiar with Resolver, Resolver’s a technology company that strives to bring together different business units within your organization to really streamline processes and bring together all that information. Risk, compliance, audit, InfoSec, cybersecurity, bringing all of that information together in one area, one platform, allows you to streamline, but really also allows you to know what’s going on in your organization.
Especially if a risk event happens, or any kind of event, with the left hand knowing what the right hand is doing and vice versa, you can actually prevent things from happening, mitigate them, and reduce their impact within your organization. That's overall what Resolver does. If anyone wants more information, there are a lot of Resolver people around (everyone who works at Resolver, feel free to raise your hands). They all know a lot about it, so ask them a lot of questions.
This is the second event that we've had here; we've tagged it the Emerging Risks Event. The next one we're actually doing in London, England, but this is the second one in our office. Really, the purpose is to bring professionals together, from similar industries and different ones, to allow a networking opportunity and a chance to socialize with some of your counterparts.
There are a few things I do need to say. For this Emerging Risks Event, we decided that instead of doing a panel, we're doing more of a debate. As you can see, it's Team Risk versus Team Reward. At the end of it, we're going to use one of our technologies, Resolver Ballot, to do a quick vote, and you get to choose who actually wins this debate. Also, don't forget to put your business card in the bowl at the front so you can win one of our swag bags.
I’m just going to hand it off to Peter Nguyen. Peter Nguyen is our General Counsel and Corporate Secretary of Resolver.
Peter: Thanks, Geoff. I'm very excited to be moderating tonight's debate because it brings me back to my high school days when I was part of the Debate Club. Debating is not atypical for a lawyer, but it's very nice to try something different tonight. We see a lot of panel discussions going on in this part of the city, but getting two different perspectives on some very interesting topics will be something to see. Secondly, and probably more importantly, I get to use my gavel, which was gifted to me by our CTO this past Christmas. Hopefully, I won't have to call points of order and we'll have a very civilized, yet hopefully interesting, debate.
I'm going to introduce our panel tonight before I go over the rules. I don't think we have any walk-up music, but first, for Team Risk, we have Simon Clift and John Daniele. Gentlemen, why don't you come take your seats. Simon is, as noted, the co-founder of Achray Capital Corp, and John Daniele is VP, Cybersecurity at CGI. Representing Team Reward are Fern Karsh and Artem Sherman, from Catalystic AI and TJX respectively.
Before we go over the rules, I’m going to ask our panel to introduce themselves, let them know a bit about who they are. Simon, why don’t we start with you?
Simon: Hi, thank you, Peter. I’ve worked about 20 years in the financial industry usually in the mathematical side of the risk equation but also dealing with a lot of sensitive intellectual property, which has made me very sensitive to the various IT risks that we’re going to look at tonight. I’ve got a Ph.D. from the University of Waterloo, the David R. Cheriton School of Computer Science and I specialize in basically, again, the sort of equations that arise when you’re making risk or option calculations.
John: My name's John Daniele. I am leading the cybersecurity practice at CGI. In the early part of my career, I worked for our nation's spy agency, the Communications Security Establishment, tracking down threat actors and bad guys that were trying to break into critical infrastructure and things like that. I now do the same sort of thing in the corporate sector, helping to track down malicious threat actors, monitor their activity, and report to our clients on how to mitigate the risks associated with that sort of nasty business.
Fern: My name is Fern Karsh, and I'm a lawyer. My background is legal and regulatory compliance. I started my career at Lang Michener, and I've been a lawyer for a little over a decade. I previously worked in financial services and wealth management, spending a lot of that time in-house as a general counsel responsible for legal and compliance for funds, mutual funds, and hedge funds, then pivoted to blockchain and cryptocurrencies. I helped create some of the first regulated crypto asset funds in Canada and advised other types of entities in that space, blockchain companies and MSBs.
Basically, had a consulting practice for a little while and then went in-house. Now, I’m at Catalystic AI, which is a technology investor, accelerator, and strategy and consulting firm largely focused on AI and blockchain and crypto assets. I’m responsible for internal legal and external business consulting, regulatory consulting, and supporting our team internally.
Artem: All right, I'm Artem Sherman. I've been in the security industry for about 16 years. I started as a private investigator, then worked in retail loss prevention at multiple retailers, including HBC and now TJX, in a few different capacities. Now, I'm responsible for delivering all of the analytics, systems, and solutions to identify and detect fraud, theft, and other types of losses.
Peter: Great. Sounds like we have a great panel that will have a lot of interesting things to say. We're going to go over the debate rules right now. It's not going to be traditional parliamentary-style debating, but each team will have the chance to present arguments on the questions we'll be showing on the screen. Team Risk, obviously, will talk about the risks to an organization of implementing certain technological innovations, whereas Team Reward will talk about the benefits. Each team will have about a couple of minutes to talk, then we'll switch. Each speaker from each team will have a chance to speak and rebut the other side.
With that, why don’t we get going? Sorry, and at the end, we’ll have a chance, obviously, for Q&A from the audience on any of the topics. So, here is the first topic. Discuss. Cybersecurity concerns should not outweigh the benefits of adopting the Internet of Things and third-party vendors. I will start with Team Risk, John or Simon.
John: I think certainly when we're dealing with IoT, there are a lot of interesting complications that come to mind. Memory constraints and resourcing constraints on these devices make it extremely difficult to deploy any kind of mitigations, controls, or good encryption. There's always going to be an inherent vulnerability with IoT; that gap can never be closed 100%. However, it is still the responsibility of those who develop these products, and of those who manage and run environments that incorporate IoT cameras or operational technology in an industrial setting, to protect that infrastructure and find some way to gain visibility and stop attacks that are happening.
These are devices that can be a point of entry into your system. From there, somebody could break into backend databases and steal a lot of your corporate information or expose private information. It’s inherently difficult to find some way to 100% close this gap. I don’t think that gap will ever be closed. Anyway, those are just some thoughts that I had about IoT and its place in the world today and whether it could even be addressed from a security point of view.
Peter: Team Reward?
Artem: Sure. I think IoT is about convenience. If we're conscious of the fact that the population is aging, and we're investing in developing all of these technologies that make our lives simpler and easier and enable people with limited mobility or disabilities to continue to live their lives with a level of dignity, without having to have a care worker, for example, taking care of them all the time, I would say that investing is worth it, even though it's early on and some of the technologies seem like they're not that efficient. For example, I could turn on a light instead of saying, "Okay, Google. Turn on the living room light." We're not necessarily the full target audience for the benefits of IoT, and our sticking with it and continuing to invest in and evolve these technologies will ultimately benefit the groups that will reap the rewards.
Peter: Simon, do you have any thoughts on what’s been said so far?
Simon: In the retail space, I would not allow one of these devices in my house, to be honest. If you look at the business model, companies like Google by default, what are they trying to do? They’re trying to predict behavior and then sell those behavioral outcomes initially to advertisers, but as we have seen in the last few years, those behavioral outcomes have been the subject of experiments and some of those experiments, again, may have contributed to the outcome of the U.S. election. That’s still under investigation.
That is what IoT is doing for many of these vendors by default, and that's before the company in question decides it's no longer going to support your device and suddenly 10 GB or many terabytes of private information turn up on a random Amazon web server. That's a rare case, for sure, but again, by default, what these devices are doing is attempting to shape behavior and sell that to a vendor.
John: I think it’s also just embarrassing that a wifi-connected light bulb could be the thing that takes down your company. It’s ridiculous to think about, but it could happen. Or, that wifi-connected refrigerator in the Resolver office or what have you. At the end of the day, I go back to my original arguments, resource constraints. There’s sometimes not enough memory in these devices to detect intrusions against that device. We haven’t really come up with a really good mesh-based approach to offloading those security functions to some other cloud-based server that will be responsible for that sort of analytics. We don’t have that today.
Caveat emptor. You use this stuff at your own risk, including the wifi-connected light bulbs that will take down your company. I can't tell you how many times a printer has been the point of penetration on one of my teams' engagements. Sometimes we test the security of systems to help organizations bolster security, and if there is a printer that we can access and then pivot off into your environment, we're going to do that. If there's a wifi-connected light bulb, we're going to break into that wifi-connected light bulb, probably from the car park across the road. You'll never see us coming.
Simon: IoT coffee machine. Can you imagine how much information you can get from one of those? It would be fantastic.
Peter: Team Reward, any kind of last thoughts on this first part of the question?
Fern: In terms of the impact, the size of the potential reward, if we look at IoT from more of an opportunity lens as opposed to just a risk lens, we’ve got a Canadian population a quarter of which is going to be elderly in the next 15 years and a lot of whom are going to be needing a lot of assistance. We’ve got millions of people who are disabled. We have increasing numbers of people living past the age of 100 and not only past the age of 80. Those people don’t have great solutions.
In terms of the light bulb and the refrigerator, that’s really where IoT is probably largely today and where it’s starting. The enabling effects of IoT when also combined with technologies like AI start to bring in things like cars and self-driving cars and not just smart homes, but smart environments, which can be hugely enabling as a matter of human rights for, again, the disabled and older people who don’t otherwise have solutions.
I think the opportunity is a very big one. On the side of whether there’s a solution for the security problems posed by IoT, on the one hand, in terms of the corporate risk, I think a lot of the smart home innovations are in the homes right now. I don’t know how many corporations, and I may be wrong about this, but are enormously at risk on the IoT side of things. When we look at IoT integrations of blockchain, blockchain does present some real potential security benefits on the IoT side in terms of decentralization of IoT devices and ability to identify bad nodes and disable them. I think we painted an overly negative picture of the potential to secure IoT devices.
John: One thing in addition to that: operational technology is fundamentally the same kind of principle, and what's deployed out there is causing risks. But when it comes down to it, who is going to be responsible for setting standards in this area? Because right now, there's no stakeholder coming forward. The government doesn't want to mandate security standards for operational technologies. The devices that go into your car and make it a self-driving car are an example of operational technology that's been deployed.
Who is going to set the standards? Nobody's doing it. Industry has a myriad of standards, none of which are necessarily interoperable and compatible. I say that you might want to really limit your exposure to this until at least some stakeholders come forward to say, "This is the way forward. This is how we're going to make sure that everything's interoperable. This is what the security bus looks like, and these are the standards we all have to follow to make sure that we're going to develop good, safe equipment that isn't going to blow up in consumers' hands."
Simon: That CAN bus in your car that makes all the bits of the engine talk to bits of the radio, et cetera, et cetera, et cetera, completely unsecured. Anything can get on that bus and talk to anything else.
Artem: I think if we’re talking about risk versus reward in IoT and look at applications like fall detection and look at applications like health monitoring, we’ve all seen examples and heard of examples where those devices have literally saved lives including the Apple Watch in detecting cardiac issues and things like that. I’ve not been able to find a single example in my research of an IoT security vulnerability causing a death.
John: The one thing I can say is that my good late associate, Barnaby Jack, is probably the one figure who looked at medical IoT in his short life and found horrific vulnerabilities that would allow somebody to break into pacemakers and what have you. The other thing I can say, from my own experience, is about the hospital setting. You would think that a hospital would be a really controlled environment, in the sense that whatever medical IoT is being used is tightly controlled and audited and looked after, but I've been in so many hospitals here in this city where there's a malware attack propagating across the network. The first thing that I say is, "You better isolate your medical IoT." And they're like, "It's not really a priority. How is it really going to get on that?"
Well, you know, some of this IoT is running operating systems that could be penetrated and you don’t want the defibrillator in the operating room to go on the fritz when you have an emergency situation. There was at least one incident in the U.S. where operational technology was switched out. It didn’t cause a health concern at that point because they had a backup on the surgery table that they moved back into place, but I think that’s going to become much, much more commonplace until we figure out how to create a good risk control and security framework around this stuff.
Peter: Sorry, John. I’m going to have to cut you off. I’m going to give Team Reward one last chance to make final arguments before we move onto the next topic. Any final thoughts, Fern, Simon? Sorry, Artem.
Fern: On the point you made about how you're not necessarily completely anti-IoT, but would wait for standards to emerge, and none have emerged to date: the fact that no standards emerge initially in an area is, I think, normal for any industry. Technology moves particularly quickly, but if we look at pretty much any other industry, eventually standards and standards makers do emerge, and they often are global. You get governmental and non-governmental organizations coming together globally to create standards and to promulgate them.
I would expect that in the IoT space there would be enough interested corporations, people, and researchers, to make that happen.
John: Not enough memory in them.
Peter: All right, thank you. That was a very interesting debate. I think we're going to move on to our next question. Jen? The next one, I think, is a particularly interesting one in light of a lot of views we're hearing out of the U.S. in terms of the 2016 U.S. election. Here's the statement: given that personal data is a valuable commodity, individuals, as opposed to organizations, governments, and third-party bodies, are ultimately responsible for the personal data that they upload online. We'll give Team Reward a chance to share their thoughts on that statement.
Artem: Sure. I think there’s certainly something to be said for personal responsibility and how we share data. There are examples where I think we could all agree that people do overshare online and that’s one of the bigger concerns in terms of what data’s available online. I think if we have a cautious approach and share the data that is necessary to gain the benefits of whatever system we’re participating in or introducing that data into, the value is there.
We see that most systems nowadays need data to work. They need personal information to customize our experiences, and some of us might not like the perceived privacy breaches around that, but if you look back 200 years, everybody had perfect privacy. You also didn't know if you'd survive the next winter. With evolving technologies and increasing lifespans as a result of those technologies and everything that's available to us, we've compromised our privacy, if compromise is really the word; we've traded some of our privacy for quality of life.
If we look today, there’s a lot of emergent technologies where the value’s questionable, but just like the previous point, continuing to invest in it and continuing to take some level of risk is certainly worth the reward if we look over how far we’ve come even over the last 30 years.
Simon: Okay, so going back to the point that many of these devices, the reason we’re sharing this information. My sister, she’s an expert animal trainer. She trains dogs, cats, chickens, sometimes even her co-workers. She’s taught me enough of clicker training that I’ve got a cat that will do tricks. It will sit for its dinner and not take a thumb off when I put its dinner plate down. You get to know the animal a little bit. You find out what behaviors it does that you want, you tag them, you give them little rewards.
In the human space, with our current information sharing, those little rewards are that Pokémon you're trying to get. Why are you sitting at this café drinking this coconut goji berry spice latte? Well, you're sitting there drinking at the café because that Pokémon is at that café and you want to get it. Congratulations. If you played that game, you've been part of one of the larger training experiments conducted by a Google-incubated company. Are you really deriving so much benefit from that? You are certainly not being paid by Google. They have no contractual obligation to our society beyond the taxes they pay, and they're getting immensely wealthy off this.
John: Just by show of hands, how many people have an iPhone? How many people … Keep your hands up. How many people shut off FaceTime this week? Congratulations, everybody who still has their hand up: whether you like it or not, bad actors can actually get access to all your phone conversations. I'd recommend that you shut off FaceTime this week until fixes have been put out there.
I just wanted to demonstrate: you can do all the right things. You can make sure that you don't overshare online, that you don't share private information where it shouldn't go, and that you keep things protected, but at the end of the day, the devices that you use, if they're not secure, if they're not developed with security in mind, can expose all of your most sensitive secrets to the world whether you like it or not.
Sometimes criminals aren't necessarily targeting you, they're targeting anybody, because everybody has credit information that they can steal. Everybody has identity information that they can make use of to create secondary fake IDs. How else do hackers find out what your mother's maiden name is? Well, they surveil you, and one great way to do that is remotely. Think about that. We could do all the right things, but if the devices that we use are not adequately secure, you're still screwed.
Simon: Back to the early days of the web, I remember when the first web applications appeared and oh, my God, you could look up somebody’s address. That felt like a violation. What’s my address doing up there?
Peter: Team Reward, any thoughts?
Fern: With …
Simon: Was the latte really [crosstalk 00:27:31]?
Fern: Sorry. I got thrown back. I was thinking about the point about the address and that’s true. I agree with Artem’s earlier point around I think there’s a little bit of hypocrisy that we should probably acknowledge around what society has encouraged us to do and the way that we share information online, basically the entirety of our resumes and what we do on a moment-to-moment and day-to-day basis across all forms of social media and elsewhere. In other areas, we generally make people somewhat personally responsible for themselves whether in finances or in how they deal with their motor vehicles or anything else.
I think that those kinds of rules should apply roughly equally in privacy. At the same rate, we need to give people tools for that. People are not privacy and security experts, so there's a common-sense expectation that people engage in best practices, and corporations can help with that. Beyond that, there's fault that shouldn't be attributed just to technology companies, but rather to the legal system.
We have a legal system where currently and for a long time now, the Privacy Commissioner of Canada has been complaining about our privacy laws and how they’re too weak and how they’re not enabling with respect to enforcement audits and related. That, again, is sort of another part of the ecosystem that could help protect us if we want to actually encourage technological innovation.
In terms of giving people tools with which to properly protect themselves, it is tough when basically all your data resides in central servers in companies that are major points of attack and vulnerable to attack. Blockchain, again, is a technology … That’s a technology that’s really well suited to personal privacy and to putting privacy back in the hands of people. That is what blockchain technology is …
That is a big part of the M.O. and structure of blockchain technologies, which are decentralized, so they eliminate the issue of central points of attack, which is a huge vulnerability. Along with the encryption and everything else, we have some blockchain companies coming out with identity on blockchain, and we've got companies like Toronto-based Skrumble that put control of communication into the individual's hands, or the company's hands, and other innovations like that which I think empower people, if we empower those technologies.
Artem: I think part of it is also technology has evolved faster than we as humans can keep up with it. We’re just catching up to understanding the value of our privacy in what we put out there. I think it’s fair to assume that if we got back to old examples, we all acknowledge that home security is each one of our responsibility. I don’t think at any point we would say, “Well, someone else is responsible for the security of my home.”
I think it’s just an example of where we just haven’t caught up to fully understand what’s going on with our data and the technologies that we interact with. Ultimately, we will catch up to understand that it’s as valuable as protecting our home. We’ll be more informed and take all of those steps. It’s just far more complicated than that.
Simon: Police in my neighborhood do a pretty good job.
John: I do agree, though, in the sense that we haven't taken very many punitive approaches to enforcing things like making sure vendors create secure code and making sure companies actually respect your privacy. If your information is exposed under the current regime, it's a $100,000 fine per record, versus GDPR, which is up to four percent of your global revenue. That hurts. $100,000 per record, that's just a tax to most large Canadian corporations. They won't [inaudible 00:31:55].
They’ll be like, “You know what? $100,000 per record equals this amount. I’d probably have to spend almost about that amount securing the system to protect those records, so I might as well just take my chances with the class action lawsuit.” You would be amazed how many corporations make these sort of risk deductions.
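The back-of-the-envelope deduction John describes might look something like the sketch below; every figure in it is a hypothetical assumption, not any real company's numbers.

```python
# Hypothetical back-of-the-envelope risk deduction of the kind described
# above. Every figure here is an assumption, not a real company's numbers.
fine_per_record = 100_000          # claimed fine per exposed record
breach_probability = 0.02          # assumed annual chance of a breach
records_exposed_per_breach = 500   # assumed exposure per incident
security_program_cost = 1_500_000  # assumed annual cost to secure the system

expected_annual_fine = breach_probability * records_exposed_per_breach * fine_per_record

print(f"Expected annual fine exposure: ${expected_annual_fine:,.0f}")
print(f"Annual cost to secure:         ${security_program_cost:,.0f}")
# If the expected fine comes in below the cost of securing the system,
# some organizations will simply take their chances, which is the exact
# deduction being criticized here.
```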
Simon: Privacy can be quite serious. There are two people in my extended circles who, if their addresses are revealed, will be hunted down by nasty governments and killed. It's quite serious. I've got to agree with the GDPR, and especially the German approach to privacy is one that we should emulate. Merkel is extremely smart. She has a Ph.D. in quantum chemistry and she grew up in eastern Germany, so she understands very viscerally what's going on in this sphere, and I just like what the Germans are doing.
Artem: On the risk versus the reward concept, who here has an iPhone? Raise your hands. And, when you find out about the FaceTime bug, who here put their iPhone in the microwave and fried it? Knowing that there is a security vulnerability, being informed that your data is now at risk and potentially could be at risk from other sources, we all continue to use the iPhone.
Simon: Bad habits.
Artem: I think the reward is worth the risk.
Peter: Gray hair, actually. We're going to move on to our next topic, which we were starting to touch on in some of Fern's comments. It's the following: we've seen a lot of technology giants investing rapidly in machine learning and AI, but commercial adoption generally seems to be lagging behind. Risk Team, why do you think this is? Why aren't more companies using machine learning and AI in their products?
Simon: Mostly, it doesn’t work that well. I’ve been at a couple of conferences lately in AI and finance and it was a very small gathering, about this size, with Stephen Boyd and Christopher Re, AI researchers at Stanford. They basically stood in front of this crowd of hedge fund managers and talked for three hours about why you should not use machine learning in your hedge fund and all the different ways that it can go wrong.
A buddy of mine runs the self-driving car project, Autonomoose, at the University of Waterloo, and I asked him a little while ago, "Are you using AI and machine learning techniques?" He got a slightly sheepish look on his face, shook his head, and said, "No, 'cause we can't trust them. They're not transparent. They're not reliable." The last NYU researcher I spent time chatting with says he usually spends an awful lot of his time talking down people's expectations. Now, if you're working for these companies, of course you're going to talk it up. Your job's on the line.
John: I think some other considerations are that when you're looking to train a neural network, you have to have a really good idea of what you want to train it to do, to find, and then find all sorts of permutations and combinations of that. The training process could be one month, two months, three months per use case. I think that's one of the reasons for a lack of adoption of AI. It takes quite a bit to train traditional approaches.
Now, we don’t make use of unsupervised learning enough. Unsupervised learning can do a really, really good job at pattern matching. Not pattern matching, but identifying patterns themselves within an unstructured data set. Then, there’s the problem of, “Okay, what’s the real application? Do we really need AI in order to automate this process?” Most of the time, just simple orchestration and automation is what’s required, not necessarily an AI-driven automation process.
It does have its places in terms of looking for patterns, identifying patterns within an unstructured data set. It does exceptionally well. Then, you still have to have a human to label it afterwards, to find all other permutations and combinations. We haven't used approaches to auto-label the data sets that an AI does eventually find, and that's also a bit of a bottleneck. That's where I think lack of adoption comes into play.
Simon: It does have an advantage that it can shovel through all of the data. Was it Joseph Stalin that said, “Quantity is a quality all of its own?” I think there’s a bit of that going on.
Peter: Reward?
Fern: AI in the context specifically of hedge funds is a very specific context. There’s lots of other purposes for AI and whoever was giving that presentation may very well be anti-AI-
Simon: He’s the one who just shut down the cancer research project at IBM Austin.
Fern: I’m not familiar with the status of that project.
Simon: Because it made bad choices.
Fern: Okay. Yeah, I'm not familiar with the status of how AI's being used in the IBM Watson cancer research project. I do know that, in general, AI is being used successfully by a lot of companies that are worth many billions of dollars, and many of them, in large part the tech giants, the Ubers, Teslas, Googles, Amazons, et cetera, would attribute their many billions of dollars to their use of AI. A lot of companies and a lot of the tech startups think that AI works great. Works great.
Having said that, AI does not always work great. I think you touched on the fact that it takes time. It's not a failing of AI that it doesn't work well immediately. AI is dependent on having unique data sets, learning from them, and having increasing, ongoing amounts of data to learn from. That is more of a medium-term, long-term exercise, and there can be politics around short-termism in companies that aren't tech giants and aren't digital natives.
Beyond that, besides theories of whether or not it's effective, companies that are not tech startups, tech giants, or digital natives are frequently not data-first. Banks, for example, have not historically been data-first. They do not have the data and the ability to just go and use AI. You need to be a first mover. Generally, it is the digital natives whose entire M.O., whose entire business strategies and decisions around what products and services they're selling, revolve around data analysis. That's the main reason why you see AI primarily used at tech companies and tech giants.
Artem: I totally agree. I think AI works, especially in the areas where there's been a lot of money and a lot of data available to go through. We interact with systems that utilize AI, probably dozens of them, every day in our lives. We know it works in very specific applications where time, effort, and a lot of money have been invested.
Commercial adoption is lagging for a few reasons. Companies are now trying to apply it to new things that haven't been approached before, in areas where there isn't a lot of data and there isn't a lot of experience with it. The other part is a lack of trust. There's a lot of cynicism about the lack of transparency of AI and the fact that you can't validate the work; you can't test that it's approaching the problem the same way you would.
There are very, very good examples, even in the medical field, where medical errors are one of the top causes of death and AI is able to reduce errors in diagnosis and prescriptions. It's able to identify those things better than doctors sometimes. So, it's really a matter of adoption and trust and investing the time, money, and effort into it. When we apply it to new fields, it often fails because we simply don't have a billion dollars to pour into figuring out whether my grocery supply chain is operating optimally.
John: I think there are a lot of tools that have democratized access to AI. TensorFlow is an example: you can download the libraries, apply them, and run with it. The risks that I see with AI have a lot to do with bias. The information that we feed into the system to recognize patterns and make decisions on still includes a lot of human biases. Could you imagine insurance companies starting to use AI to set their premiums accordingly?
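To give a sense of how low that barrier has become, here is a minimal sketch of the kind of thing the libraries make trivial; the data is random stand-in data, and the tiny model is purely illustrative, not anyone's production approach:

```python
# Minimal, purely illustrative sketch of how accessible the tooling is:
# a tiny classifier trained on random stand-in data in a dozen lines.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 8).astype("float32")  # stand-in features
y = (x.sum(axis=1) > 4.0).astype("float32")    # stand-in labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, verbose=0)

loss, accuracy = model.evaluate(x, y, verbose=0)
print(f"training-set accuracy: {accuracy:.2f}")  # the point is the line count,
                                                 # not the model quality
```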
There’s going to be a lot of horrendous decisions being made before we actually figure out to do AI properly. I think we’re embarking upon a really interesting project where we’re not paying enough attention to the risks, we’re not paying enough attention to creating good controls, we’re just blindly adopting this kind of technology, rolling it out, but when it starts making decisions on our behalf, it’s going to lead us to some interesting places.
The one area where I think we're going to see a lot of disruption with AI is government policy. Very quickly, within the next five to 10 years, we're going to have government policy informed by an AI. That's my prediction for the future. When we do have government policy informed by an AI, we're going to be living in a very strange world, I think.
Computers don’t accommodate for the nuance of human living and experience and this is going to be really, really interesting when environmental policy is being established by an expert system. I think there’s going to be a lot of missteps along the way and hopefully, those missteps are not catastrophic.
Simon: I heard a theme at the Trudeau AI meeting in November. A young lad from Waterloo pointed out that neural networks are what we in mathematics call a differential equation; there is actually a way to write them out really nicely. As a mathematician, once you've written them out in this really nice way, you can actually characterize them fairly well. This is a theme that's been around for some 30 years now.
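The correspondence being gestured at here is presumably the residual-network view of deep networks; a rough sketch of the idea:

```latex
% A residual layer updates its hidden state additively, which is the
% Euler discretization of an ordinary differential equation:
\[
  h_{t+1} = h_t + f(h_t, \theta_t)
  \quad\longrightarrow\quad
  \frac{\mathrm{d}h(t)}{\mathrm{d}t} = f\bigl(h(t), t, \theta\bigr).
\]
% In this sense a deep network can be "written out" and analyzed with the
% standard machinery of differential equations.
```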
Where AI tends to get a foothold is in low-dimensional problems. If I’m driving a car, a stop sign is always a stop sign. It always means stop. Lines on the road are something you drive between and if there’s a telephone pole, you don’t drive there. It’s a two-dimensional problem with a lot of repetition. That’s where you can extract some patterns.
Now, some poor guy driving his Tesla in Florida suddenly discovered there's actually a third dimension to the problem: the truck spanning the road was not a bridge, as the AI had identified it, and he paid for that with his life. We're going to see a lot of mistakes like that.
Artem: I think that’s a question of nuance and we have those gaps in nuance simply because we lack data.
Simon: Very nuanced truck.
Artem: Yeah, but now that that example has occurred, the system can learn from it and it won't happen again. You can teach it that this is now a truck and not to hit it. Whereas-
Simon: My buddies that do this for a living don’t do that.
John: You can, however, teach a system to recognize things, but we do not have AI that comes even remotely close to understanding anything of our world. Until we do, I think there are always going to be those risks that we have to manage. We're just doing a really poor job of managing that right now, from my perspective.
Artem: In the example of a Tesla, they’re far safer than a regular car. With a regular car, you can make the same errors, the driver can continue to make the same errors, and we don’t learn from them. When we implemented seatbelts, people wouldn’t even wear them. This is an example where we could evolve systems and safety and everything else so much more quickly. Errors will occur and at times, those will also be unfortunate, but if you look at the bigger picture, Tesla drivers are far less likely to die than non-Tesla drivers.
Fern: The errors that could occur are not a huge mystery. Bias is a real problem, and you're aware of it because society's already aware of it. These are foreseeable risks that the industry has awareness of, and there are governance solutions that can be put into place. You wouldn't want AI being completely autonomous, without human intervention, initially, on the governance front, to prevent those bias issues.
John: An interesting problem from a legal perspective, though, is how you arrive at a decision tree. Some of the decisions that an AI makes are so complex it would take hundreds of thousands of pages of information just to lay out a logical path to a particular decision. How is that going to play out in the courtroom when the human who's responsible for this AI is asked, "Well, how did it arrive at that decision?" There's no way that a human can answer that question.
Fern: Eventually, like in any other area, there are just going to be standards of due diligence. The law and the courts are going to decide what those are and what industry needs to be responsible for in terms of due-diligence standards, based on what's possible. If we have transparency and we're able to get there in terms of AI and open up that black box, then that might be it. If we don't, then it will be whatever industry standard is set.
John: So, we’re going to trust a balance of probabilities.
Simon: Or, even worse. Have you seen that there's a bit of a thing going on in the industry where they're taking images that AIs have presumably recognized reliably, changing a small number of pixels, adding a little bit of fuzz, and getting very, very different results? It's almost too easy. It's a shooting-fish-in-a-barrel exercise to fool these AIs, which are, again, basically just differential equations with a bunch of coefficients that have been fit to data.
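A toy version of that "little bit of fuzz" attack fits in a few lines. This is a minimal sketch against a hand-rolled linear classifier on synthetic data, not an attack on any real vision system; the FGSM-style perturbation direction (the sign of the gradient, here just the weights) is the standard trick being alluded to:

```python
# Toy version of the "add a little fuzz" attack described above, against
# a tiny hand-rolled linear classifier. The data and model are synthetic
# stand-ins, not a real vision system.
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian "image" classes, flattened to 64 features.
class0 = rng.normal(-1.0, 1.0, size=(200, 64))
class1 = rng.normal(+1.0, 1.0, size=(200, 64))
X = np.vstack([class0, class1])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Fit logistic regression with plain gradient descent.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))

x = class1[0]  # a correctly classified example
logit = float(x @ w + b)
print("clean prediction is class 1:", logit > 0)

# FGSM-style step: a uniform nudge to every "pixel" against the weight
# signs, scaled to be just big enough to flip the predicted label.
epsilon = 1.05 * logit / float(np.abs(w).sum())
x_adv = x - epsilon * np.sign(w)
print("perturbed prediction is class 1:", float(x_adv @ w + b) > 0)
print("per-pixel change:", round(epsilon, 3))
```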
The reason I don’t use them in my business, I use a completely different type of machine learning, is they don’t deal with randomness, where I have to deal with randomness.
John: You know those little password checks where, before you type in your password, you have to identify yourself as a person by picking a bunch of images and things like that? I use AI to break that stuff today. [crosstalk 00:47:14] Adversarial use of AI is also going to be a brave new world when the bad guys start doing AI-driven phishing campaigns. Phishing campaigns today are already wildly successful.
There’s millions of dollars being siphoned out of corporate coffers right now because of bad guys who just basically game the procurement process in a company. Imagine if they have an AI that can just do that all on their own. You just push a button, the thing will have a conversation, and extract even more millions out of companies. It’s going to be an interesting future.
Fern: Also, AI-
Peter: Sorry. Sorry, Fern. We’re going to cut it off. I’m sorry. Obviously, very interesting discussions. Obviously, some potentially very scary outcomes given John’s comments. I think we’re going to move on to our next statement. Enterprise blockchain technology, what are the risks and rewards that organizations need to consider? Fern, you’re the expert, why don’t you share some thoughts.
Fern: Blockchain technology's gotten a bit of a bad rap historically from being associated with cryptocurrencies. The original blockchain invention was Bitcoin, in 2008, by the mysterious Satoshi Nakamoto, whose identity nobody knows. Ultimately, I think society has come around to being a lot more open to blockchain technologies these days and to some of their applications, blockchain being the foundational technology underlying those applications.
We see even beyond risk-taking startups, we see very large companies and big enterprise employing blockchain and experimenting with blockchain these days. We see JP Morgan Chase, notwithstanding its CEO’s comments about certain cryptocurrencies, they’ve created their own blockchain. So, they’re in the space. We’ve got hedge fund managers in that space. We’ve got Fidelity that’s created custodial services for cryptocurrencies.
We have Congress saying and writing reports about how blockchain is the way forward for security. Again, going back to some earlier comments around centralized versus decentralized technologies, and that combined with encryption and other aspects of technology, creating a lot of opportunity for security and privacy. We see blockchain being used in, frankly, a lot of boring and probably future effective ways these days in just creating internal efficiencies.
Blockchain is a decentralized, intermediary-eliminating, encrypted technology that banks are currently experimenting with when it comes to back-office transactions. There's a multi-billion-dollar, something like an $80-billion, industry in back-office costs that banks carry, and blockchain is a huge efficiency driver and cost saver there. We see similar things going on in supply chain and healthcare and across all kinds of industries where any of security, privacy, and the generation of internal efficiencies and cost savings are of value.
John: I think blockchain is a really great way to get a higher valuation if you're a startup. At the end of the day, we haven't really found really, really good uses for it, sorry, beyond anything with a ledger.
You can implement the ledger with blockchain, but what I also see is people integrating it blindly and introducing errors into the consensus algorithm, introducing problems in the way that the network is designed so that, once again, consensus can be gamed. There's not enough thought being put into, number one, why blockchain? Why can't we just use other simple proven approaches? And, number two, when we do implement blockchain because we think it's going to do a better job, it's going to be more secure, are we really thinking about the risks? Have we done enough research into how the system can be effectively gamed, and have we really mitigated those risks?
Smart contracts with Ethereum are probably even better than just simple use of blockchain; smart contracts, I think, have a much wider possibility of introducing themselves into a whole variety of business processes, automating things, and what have you. Even then, Ethereum has got some big challenges right now from a security point of view, to the point where a lot of people are starting to pull out.
I just think that you have to ask yourself, can I do what I need to do with another proven, secure method? Do I really need to resort to using the blockchain as an alternative?
Simon: Yeah, my favorite question to ask the blockchain entrepreneur in front of me is: what would happen if you just used GitHub to do this? Maintain a cryptographically signed ledger, put the previous commit number inside your current commit, boom, you've got an audit trail. Make people sign in with their secure shell keys, boom, cryptographically secured identity for the updates. Put a couple of hooks in the Git repository so that it doesn't accept until everybody's synchronized.
A lot of blockchain applications, you could basically just put them on GitHub and go out for a beer after lunch because you’re finished. I used blockchain in that sense in my company to produce an audit chain of every one of my financial transactions. Every one of the recommendations that comes out of my software is secure and auditable. It’s this much source code to do it. It’s really easy. I think the biggest risk here is that you’re just not doing anything that interesting.
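As an editorial aside, here is a minimal Python sketch of the hash-chain pattern Simon is describing with Git commits: each entry commits to the hash of the one before it, which is the core of an append-only audit trail (signing each entry, as he suggests with SSH keys, would add the identity piece). The names and records are invented for illustration; this is not any panelist’s actual implementation.

```python
# Minimal sketch of a hash-chained audit ledger, the pattern Simon
# describes with Git commits. Illustrative only.
import hashlib
import json


def append_entry(chain: list, payload: str) -> None:
    """Append a record that commits to the hash of the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})


def verify(chain: list) -> bool:
    """Recompute every hash and check that each entry links to its predecessor."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {"payload": entry["payload"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != digest:
            return False
        prev_hash = entry["hash"]
    return True


ledger: list = []
append_entry(ledger, "trade 1: buy 100 XYZ")
append_entry(ledger, "trade 2: sell 50 XYZ")
print(verify(ledger))   # True: intact audit trail

ledger[0]["payload"] = "trade 1: buy 900 XYZ"  # tamper with history...
print(verify(ledger))   # False: the tampered entry's hash no longer matches
```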
Fern: Blockchain shouldn’t be … Blockchain is a bad idea for companies that don’t need blockchain; it should not just be employed in every single company. As for the valuation boost, that probably worked in 2017, and it also got those same companies in front of the SEC. I don’t think many people are going to be using that technique going forward other than maybe-
John: IBM right now is trying to push blockchain as a means of validating identity, deploying it in a government setting to protect our financial regulatory systems for filings and things like that. They’ve got a whitepaper out there. I found it super fascinating. I think it’s a really interesting concept, because you’ve got multiple different organizations that could validate an individual’s identity in a consensus-based approach.
Just wait until somebody finds a way to game the consensus algorithm, because it will happen. Could you imagine if we’d gone way down that road, and now you’ve got a passport whose numbers are validated by a blockchain, and there’s a fundamental flaw in that consensus algorithm? That’s going to be disastrous.
Fern: People have been gaming everything in blockchain already, so we can foresee those risks. The exchanges have been getting hacked. The wallets have had their problems. There’s been a total lack of governance in this space, and that’s been exacerbated by media. We’re generally aware of governance issues. We’re aware of the potential to game consensus mechanisms. Is that something you’d be particularly concerned about in a permissioned environment? Do you think people would be gaming …
John: Absolutely. If it has to do with being tied to physical identity, like IBM is trying to push right now, yeah, I’d be concerned. Think of any process whereby, let’s say, three people have to agree before a step is considered to carry a high degree of assurance: you have to have x-number of consensus checks before the next part of the process is put forward. When we start attaching that to really critical functions, we need to take a really good, hard look at what the negative outcomes are and whether we’ve really accommodated those security risks.
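To make the consensus-check idea concrete, here is a deliberately simple m-of-n quorum gate in Python. The validator names and thresholds are invented for illustration, not taken from IBM’s design; John’s worry is that whatever threshold you pick, an attacker who can influence enough validators crosses it.

```python
# Illustrative m-of-n quorum check: a step is only treated as
# high-assurance once enough independent validators have signed off.
# Validator names and the threshold are hypothetical.

def quorum_reached(approvals: set[str], validators: set[str], threshold: int) -> bool:
    """Count only approvals from known validators against the threshold."""
    return len(approvals & validators) >= threshold


validators = {"registry", "bank", "telecom"}  # independent parties
approvals = {"registry", "bank"}              # who has signed off so far

# With a 3-of-3 policy this identity claim is NOT yet validated;
# with 2-of-3 it would be. An attacker who can influence two of the
# three validators defeats the 2-of-3 policy entirely.
print(quorum_reached(approvals, validators, threshold=3))  # False
print(quorum_reached(approvals, validators, threshold=2))  # True
```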
Fern: [inaudible 00:55:37] that sounds a bit theoretical, though, so how exactly is that going to be gamed? If you’ve got tons of distributed nodes around the world, even if you know where the majority sit, and on top of that you’ve got a third-party service provider managing them, how then are you gaming that system?
John: It becomes significantly easier to game when you can influence more of the system. This is one vulnerability that may have been mitigated in recent history, but the more nodes I can influence, the easier it is to tip the balance one way or another.
Simon: Again, through IoT.
John: As much as that may or may not be an effective attack today, and it certainly has been used in recent history, similar approaches will be found. That’s inevitable. There will be more security vulnerabilities identified, but have we really thought through what happens then? What are the controls we deploy to catch those problems?
Fern: I think the fact that there isn’t just the proof-of-work consensus mechanism but about 10 others goes to show that we have thought through these issues.
John: No standards.
Fern: Did you say no standards?
Artem: Two to seven.
John: No standards. We’ve got 10 different consensus algorithms, so there’s not really much standardization in the industry.
Artem: Isn’t that the case with any emerging technology? I think this is just a case of something new that we’re still learning, and over time we’re going to harden it.
John: Once upon a time, we had RFCs. Before a technology was widely deployed, before a new protocol went out there, engineers had to spend a good deal of time thinking through these problems and creating an RFC. We don’t do that anymore.
Artem: Is the issue with blockchain technology or with the approach to deploying it? ’Cause those are two different things, and the value of blockchain stands on the merit of the theory, as opposed to how some people implement it poorly.
John: Well, there’s only one really good use for blockchain, which is if you have a ledger. You can secure it better, theoretically. I would argue that for most applications there are traditional methods that secure a ledger better than exposing it to the potential risks of a blockchain.
Simon: That’s an interesting one. A vendor from whom I get data decided to change the terms of service under which I got the data. I didn’t really like those. He was all, “Here, you can have a free year of our alternate data service.” Great. So, I looked at the contract and the last clause under termination required me to remove all copies of the data from the system. Well, hang on a minute. I need copies of that data in my audit chain.
If I terminated my contract with this vendor, or if the vendor decided to terminate the contract with me, I would basically have to go back and delete all my business data. Now that I’ve got this information cryptographically signed into my ledger, what happens when somebody comes back to you and says, “Hey, you can’t have that piece of information”? There are some noxious things in the Bitcoin blockchain, some links to illegal dark websites that people put in there as comments. How do you remove those from the blockchain?
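One common mitigation for the dilemma Simon raises, offered here as an editorial aside rather than anything the panel proposed, is to keep the raw data off-chain and store only its hash on the chain: a takedown then deletes the off-chain copy while the chain itself stays verifiable. A minimal Python sketch, with invented names:

```python
# Sketch of the "hash on-chain, data off-chain" pattern. Illustrative only:
# deleting the off-chain record honors a takedown demand without breaking
# the append-only chain of commitments.
import hashlib

off_chain_store = {}   # id -> raw data, deletable
chain = []             # append-only list of (id, sha256-of-data)


def record(item_id: str, data: str) -> None:
    off_chain_store[item_id] = data
    chain.append((item_id, hashlib.sha256(data.encode()).hexdigest()))


record("r1", "vendor data under a deletion demand")

# Takedown: drop the data, keep only the commitment.
del off_chain_store["r1"]

# The chain still proves that "r1" existed and what its hash was, but the
# content itself is gone; a public blockchain that embeds raw data in
# transactions (like the Bitcoin comments Simon mentions) cannot do this.
print(chain)
```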
John: That’s sort of a public-blockchain problem. Most people deploying blockchain are deploying private blockchains, not exposing themselves to public blockchains. Although, if anybody is actually doing that, it certainly applies; there are huge risks in moving your ledger to the public blockchain.
Simon: That’s true, but, again, that one vendor, if I had signed that contract, would basically have had the right to shut me down, because I use this sort of blockchain-style mechanism within my [inaudible 00:59:28].
Peter: All right, sorry. I’m going to have to move on to our last topic ’cause we’re kind of running up on our time. Coming back to the topic of privacy, Team Risk, innovative technology and its benefits should trump the right to personal privacy. Thoughts?
John: Sorry, just restate that one more time.
Peter: Sorry, innovative technology and its benefits should trump the right to personal privacy.
Simon: Again, if you like being trained by large commercial companies for their fun and profit, receiving roughly the equivalent of a cat treat in return, then go for it.
John: Yeah, what he said. Do I really need to say anything more? Will the benefits trump personal privacy? I don’t buy the “I have nothing to hide” argument. At the end of the day, your identity is valuable to some criminal threat actor.
Simon: My teenage daughter gets by just fine without a phone.
John: At the end of the day, there’s value in information that you may not even ascribe value to yourself. If you have such a cavalier attitude toward privacy, you’re going to find the next 10 years really interesting, when privacy becomes an archaic term from the distant past. When corporations win and they can use your information and sell it 50 ways to Sunday, then you’re going to look back and realize, “Oh, that’s why I should have been more concerned about personal privacy.”
Artem: I think there are more options than that, and it’s an interesting proposition. If we value privacy, we value freedom. If I’m willing to give up my privacy, that’s for me to decide. It seems we are willing to give up our privacy for the benefits these technologies deliver to us.
Simon: Pokémon.
Artem: On the one hand … What’s that?
Simon: Pokémon.
Artem: Sure. If we value freedom and individual choice, which not all societies do, but this one does, then it is up to the individual to decide. If they are willing to give up their privacy for Pokémon or any other video game, or for the ability to connect with others, to reach more people than we ever could before, to organize and assemble with like-minded people, that’s up to us to decide. You can’t say that you value freedom and then tell people what that means for them.
John: If you value your freedom, you should think about the implications for our democracy at the end of the day. If many of us in society can be convinced to give up our privacy, that is going to have an effect on lawmakers, the way they develop policy, the bills that are brought before government, and the laws that are then passed.
At the end of the day, if the majority of us do not value privacy, eventually that is going to make its way into weaker laws. If you value your freedom, I think you should really consider that implication 10 steps down the road. We need many, many more people taking privacy seriously, because if we don’t, and we don’t voice our concerns, the laws that get weakened down the road are going to devastate your life and your personal freedoms.
Artem: That’s pretty alarmist, though, because our lives have only gotten better, generation over generation, year over year.
John: Just take a look at China. China is a country with a different concept of privacy altogether. Individuals in Chinese society don’t think it untoward to keep an eye on their fellow neighbors; it’s a different perception and view of privacy. As a result, there was less resistance to the government prying into the private lives of Chinese citizens.
What they have now is a social credit system built on your private information. The decisions that you make are calculated into a formula, and a score is ascribed to you. That score then dictates whether your children get to go to the best schools and the best universities or whether they are condemned to a life of poverty. I don’t think it’s alarmist at all. We can look at other societies around the world and see exactly where a lack of emphasis on privacy leads.
Fern: Allowing technology to innovate isn’t necessarily going to further enable corporations’ ability to sell your information.
John: But, it has.
Fern: Enabling more technological innovation can also enable individuals’ ownership of their own information and their ability to sell it directly if they want to, because those technologies are already being created [crosstalk 01:04:53]-
John: I recommend reading the terms and conditions on half of the cloud services that you subscribe to because that is not the case today.
Fern: Not the case for, sorry, normal cloud technologies. But there are blockchain medical companies coming out these days. I think there may be one in partnership with IBM, but, in any event, they have the very goal of putting ownership of data, medical data as one example, in the hands of the individual. If you can do that in that space, you can do it in any space.
Privacy is not the only value we have as a society, and not the only value that comes up against technology. Innovative technologies do a lot of good for society, and specifically big data, which is, I think, what we’re really talking about.
John: I just think we are coming to an inflection point. I think we are moving so fast with the implementation of every new development, we’re not paying enough attention to where we’re headed.
Simon: We don’t have the terms to talk about this. We’re used to thinking of invasion of privacy in Cold War terms; it was something that evil states did. That information flowing from your Fitbit? That’s called medical records. It used to be that if you wanted to put a recording device on someone’s kitchen table, that was called a bug. You needed a warrant. You had to be law enforcement.
If you wanted to put a tracking device on someone and track their location 24 hours a day, they had to be criminally convicted and you had to get a court order. We’re talking about these things with a vocabulary that goes back to the restrictions we’ve put on our state, but we haven’t developed the language to talk about the restrictions we need to put on companies that are doing all of these things we wouldn’t permit a state to do. Why is that okay?
Artem: I don’t think some of those parallels necessarily work. If we look at, and I’m going to go back to China, they were never in a position like ours. They never volunteered to become an oppressive regime and society; they were always that way. To argue that we will evolve into that type of regime isn’t realistic, because that hasn’t occurred here, and China doesn’t compare that way.
We are still a free society. We choose to put those listening devices in our homes. We fully consent to it. We understand it’s a microphone, we understand it’s a camera, we understand it’s web-connected, and when laws start to change, we have the ability to elect people who will change them back or influence them in the way we want. That’s freedom.
Fern: These alarmist fears, that privacy laws will move in a direction consistent with people not caring about privacy, are not playing out in society. GDPR is brand new, and the direction we’re moving in is toward much, much stricter and more stringent privacy rules.
Peter: Sorry, I’m going to have to end it here and leave Team Reward with the final word. I’m sorry, John. Thank you very much to our panelists for a very interesting debate; they raised some excellent points. We’re going to open it up to the floor now so that anyone can ask our four experts a question on any of the topics. I guess we’ll pass the mic around. Any questions? Sorry, I’ll just run this [inaudible 01:08:40] over here. Sorry.
Audience Member: Thank you for your views; they are extremely refreshing. This question is for anyone on the panel. The resounding theme in the conversation was the need for standards and a standard-setting authority. Who would you say that organization is? We’ve seen the EU come up with GDPR, but France has its own privacy laws now. For self-driving cars, would Tesla be a good standard-setting body, since they might have industry best practices? Are countries the right standard setters? Are organizations? Are trading blocs? Who sets these standards?
Simon: The IETF.
John: There are a number of international organizations working on standards, NIST in the United States, for example. From a security perspective, the NIST standards are highly regarded across the board, very robust. But how many people are paying attention to them, and how many are conforming to the standards that exist?
Simon: That’s the body whose elliptic-curve random number generator the NSA backdoored.
John: Yeah, very few, at the end of the day. It’s not as though in every area we have no standards-setting bodies and no stakeholder coming forward to say, “Hey, we’re going to put a stake in the ground.” In many cases, there are standards that are simply being ignored.
Fern: It depends on which area you’re talking about. I think you’re asking specifically about security and privacy, and more specifically about what type of body would be appropriate. If we look at any other space, including any of these other areas of technology or just other industries, you would want broad representation. It wouldn’t be one company or a few companies; you would want broad representation, maybe on a global basis or, depending on what you’re dealing with, a more local one.
John: There is an international body that’s part of the United Nations called the ITU, the International Telecommunication Union. It’s a body that could play a bigger role in setting standards for new and emerging technologies, for example. That would be one thing I would advocate for: more international participation in standard-setting.
Simon: If you have the patience to deal with the UN.
Audience Member: Really great talk; it was very interesting. This question is for anybody, but I’m really curious to get your take on how workplace training should evolve to keep up with technology. For example, so that someone doesn’t pick up a USB drive they found on the floor of their office and plug it into their computer, in which case it doesn’t matter whether they’re using blockchain, because the attacker pretty much has access to everything at that point.
John: I think more scenario-based training is the answer. Right now, training is very one-dimensional: information is fed into your brain and you’re somehow supposed to absorb it and come away actually knowing how to apply these skills. More scenario-based training across the board, where you get to immediately apply the skills you learn in a mock exercise, for example, is probably a far better way of teaching these lessons.
Simon: Also, sorry, to some extent it falls on the IT department to become the denier of certain information services. It behooves the IT department to make sure that certain things simply can’t happen. For goodness’ sake, lock out the USB ports on any desktop, for instance. Reduce the attack surface.
John: Yeah, that’s a good question. You’re dealing with human behaviors at the end of the day, which are undergoing some fundamental changes. You’ve got to find a way to incentivize workers to take the training seriously. The reason we deliver training the way we do is probably that it’s the cheapest mechanism for delivering it en masse. Maybe organizations need to get a little more creative, with more human participation in the process as opposed to just trying to gamify it in a video-game kind of scenario. I don’t have any good answers for you there, but this is definitely going to be a challenge as we develop shorter and shorter attention spans overall.
Simon: In my business, we just fire anyone who can’t deal with the sales systems as they are. A friend of mine, and I was shocked by this, was a rancher from Wyoming who had something like 20 dogs in his life. How does that happen? Well, you go downtown to the market, you buy a dog, and you toss him in the back of the pickup truck. If it makes it home, great. If it doesn’t, it wasn’t smart enough to be a ranch dog. I think a similar principle applies here.
John: You’re making me feel really bad ’cause I probably have a few mandatory training courses to take yet.
Peter: All right, I want to thank the panel again. We’re going to wrap up the formal portion. Thank you again for your comments. Please vote; the website is right behind the panel. Please vote for the winning team. I think everyone’s going to be sticking around for informal Q&A, and there are still some drinks and some food, so please enjoy yourselves. Thank you again for coming tonight.