Governance, Risk and Compliance
By Diana Buccella Modified February 7, 2021
As we enter a new decade, disruptive technologies promise new solutions, further innovation, and new ways to connect with our customers and markets. But as with any opportunity for growth, they also bring with them risks that businesses should start considering now.
The digital economy and its disruptive technologies aren’t going anywhere. According to 2019 research released by IDC, more than 60 percent of global GDP will be digitized by 2022. IDC also predicts that nearly $7 trillion in investments will be injected into the IT sector between 2019 and 2022, driving digitally enhanced offerings that will deliver growth in every industry.
With projected growth like that, it’s critical that organizations understand the risks these up-and-coming technologies present, so that professionals in risk management, corporate security, and information security can begin building the strategies and initiatives that will prepare their businesses to embrace and make the most of them.
Consumers and regulators expect companies to identify, address, and mitigate risks surrounding the privacy and protection of consumer data in the cloud. While this means keeping abreast of current and upcoming data protection legislation like GDPR in Europe and HIPAA in US healthcare, compliance and risk mitigation measures need to extend beyond simply complying with legislation. Consumers increasingly want companies to be fully transparent about where their data goes, who sees it, and what will be done with it.
IBM’s 2019 Cost of a Data Breach Report found that the average data breach costs $3.92 million and affects some 25,000 records. The same report determined that, while the average lifespan of a breach is 314 days, containing a breach within 200 days saves organizations a significant amount, some $1.2 million on average. Even with the best controls, data breaches do happen, but according to IBM Security’s research, incident response teams and the use of encryption have proven helpful in containing their costs.
Concerns about consumer privacy reached new heights in 2018, as a number of companies suffered high-profile data breaches that compromised their customers’ private information. Notably, it was reported that Marriott International Inc. faced a breach that exposed the personal information of 500 million customers. Through the attack, the hackers were able to gain access to names, phone numbers, emails, passport numbers, travel details and payment information of customers. With breaches like this on the rise, companies face the challenge of gathering customer data without violating their users’ privacy or exposing their personal information to malicious actors.
Machine learning continues to be an exciting form of disruptive technology for many applications because it offers the possibility of removing human biases from the equation when making important judgments and decisions. However, this is only effective if the dataset and model are themselves free of bias.
In October 2018, for example, the news broke that Amazon had reportedly scrapped a machine learning tool for selecting the top resumes among its job candidates, because the system discriminated against women. The bias was apparently due to the fact that the tool was trained on a dataset of resumes from previous applicants, who were predominantly male.
Microsoft’s “Tay” chatbot offers a cautionary tale of an AI system gone rogue, creating a major embarrassment for the company. Tay “learned” from her interactions with Twitter users, some of whom “taught” her to make extremist and bigoted statements. The bot was quickly shut down after only a single day on the platform.
While the backlash from Tay was relatively mild, AIs left to run unchecked can represent a major and even existential risk to your business’s reputation and bottom line.
Artificial intelligence systems that are prone to errors, subject to bias, or easily hacked can expose your organization to public criticism, as the Australian government recently discovered when it implemented an algorithm designed to detect welfare fraud. Flaws in the algorithm caused thousands of welfare recipients to receive false debt notices, eventually leading to a large public outcry and an official investigation by the Australian Senate.
Impostors can create a malicious chatbot bearing the branding of a legitimate business and place it in an app store, where unsuspecting customers of that brand seeking help will download it. From there, the impostors have a direct line to those consumers and all the sensitive and personally identifiable information (PII) they are willing to hand over to someone who appears to be helping with their real query. To make matters more complicated, hackers may not even need consumers to download an app to encounter a spoofed chatbot. Through malware, they may be able to place a spoofed chatbot right on the company’s legitimate website.
As AI systems become more intelligent and gain more agency, addressing the ethical and legal issues of these disruptive innovations will be a preeminent concern. For example, companies that are researching self-driving cars must deal with their own versions of philosophical dilemmas such as the trolley problem. When an accident is inevitable, is it acceptable for a self-driving car to divert its course in order to save more people if that puts its passengers’ lives at risk? Whose lives should be prioritized – the car’s passengers or the pedestrians outside the vehicle?
Ericsson forecasts that there will be 29 billion devices connected to the Internet of Things by 2022, from smartphones and GPS devices to “smart” thermostats and toasters. This massive IoT growth offers billions of new attack vectors for malicious actors.
Businesses need to make sure that their IoT-connected devices are safe, with no default passwords and with all security updates installed.
Data breaches of customers’ personal and financial data are devastating enough, but their repercussions are limited to the individuals affected. What happens when attackers are able to breach an IoT network that manages public infrastructure? From hacking traffic lights to bringing down power plants, the possibilities are extensive, and the risks are severe.
When the IoT is applied to infrastructure such as electrical grids, those systems must be protected with both physical security and cybersecurity. The massive Northeast blackout of 2003, which affected more than 50 million people, offers a picture of what could happen in a worst-case scenario during an IoT attack.
Although the risks in technologies like cloud-based storage, chatbots, and machine learning are substantial, today’s risk management and compliance functions have ways and tools to address them.
When identifying and addressing cloud-based data risks, the solution generally requires a strong monitoring and oversight function to keep abreast of current and coming legislation. It also involves meaningful investments in InfoSec tools and technologies that provide your organization with protection against data breaches.
To avoid bias in machine learning, the solution is generally to ensure that the input data you feed to your AI is as accurate and free of prejudice as possible. Although this solution seems simple, it can be hard to implement in practice; the methodological problems that data gathering algorithms can suffer from can be very hard to spot. When in doubt, more data is usually better, although it’s important to consider where that data came from as well.
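One simple check along these lines is to measure whether one group in your training data is selected far less often than another. The sketch below uses made-up resume-screening outcomes and the common “four-fifths” rule of thumb; the group labels and threshold are illustrative, not a complete fairness audit:

```python
from collections import Counter

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest group selection rate to the highest.

    The "four-fifths rule" of thumb flags ratios below 0.8 as a
    potential sign of bias worth investigating in the data.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (group, selected)
records = [("A", True), ("A", True), ("A", False), ("A", True),
           ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(records)
print(rates)                    # {'A': 0.75, 'B': 0.25}
print(disparate_impact(rates))  # ≈ 0.33, well below 0.8 -> flag for review
```

A check like this won’t catch subtler methodological problems in how the data was gathered, but it is a cheap first screen before training.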
There are several approaches to protecting users’ personal information. Some tools allow you to build machine learning models that guarantee differential privacy by adding random noise to the dataset. Other researchers are investigating whether machine learning can be effective on data that is already encrypted.
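To make the noise-adding idea concrete, here is a minimal sketch of the Laplace mechanism, one standard way to achieve differential privacy for a count query. The dataset, the `private_count` helper, and the epsilon value are all hypothetical choices for illustration, not a production implementation:

```python
import math
import random

def laplace_noise(scale):
    """Draw one sample from a zero-mean Laplace distribution."""
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(values, predicate, epsilon=0.1, sensitivity=1.0):
    """Count matching records, then add Laplace noise calibrated so the
    released count is epsilon-differentially private.

    A count query has sensitivity 1: adding or removing one person
    changes the result by at most 1, so the noise scale is 1/epsilon.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(sensitivity / epsilon)

# Hypothetical dataset: customer ages
ages = [23, 34, 45, 29, 61, 38, 52, 41]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5)
```

The released count is slightly wrong on any single query, but no individual record can be reliably inferred from it; a smaller epsilon means more noise and stronger privacy.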
Regularly assessing your company’s website and networks with a vulnerability scanner can help identify known security holes. Many organizations also hire outside security consultants (such as penetration testers) to identify and fix the vulnerabilities that remain. Keeping your devices and information secure is a process that never really ends, of course, and when managing a large volume of security-related data, it can be very helpful to use software to keep track of these risks.
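As a very rough sketch of the kind of reachability check a vulnerability scanner automates, the snippet below probes a host for open TCP ports. The host and port list are placeholders, and real scanners go much further, fingerprinting service versions and matching them against known vulnerabilities:

```python
import socket

def check_open_ports(host, ports, timeout=0.5):
    """Return the subset of ports that accept a TCP connection.

    This only detects reachable services; it does not identify
    software versions or known security holes. Only scan hosts
    you are authorized to test.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connected
                open_ports.append(port)
    return open_ports

# Example (placeholder host and ports):
# check_open_ports("localhost", [22, 80, 443, 3306])
```

Output like this is exactly the kind of security-related data that quickly outgrows spreadsheets, which is where dedicated tracking software earns its keep.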
When identifying, managing, and mitigating the vast matrix of risks that today’s companies encounter, many risk management professionals turn to software that helps them keep track of risks, prioritize their mitigation, and track action plans and due dates.
Resolver helps companies track risks, implement controls, and monitor remediation efforts.