Discover more about this emerging risk as we know it now
The Number of Cyberattacks Continues to Escalate
The frequency, severity, and resulting cost of third-party vendor- and supplier-based cyberattacks against higher education institutions are at an all-time high, and third parties are once again the most common attack vector according to Coalition’s 2023 Cyber Claims Report Mid-Year Update. This trend emerged in tandem with the advent and adoption of large language models (LLMs) in 2022-2023, creating both a need and an opportunity for chief risk officers and risk managers to implement research-backed cybersecurity controls to protect their campuses.
Effective Cybersecurity Controls
Higher Ed Study’s 2023 Higher Education Survey of Emerging Trends in Procurement & Cyber Security found that the two most effective cybersecurity controls to reduce third-party vendor cyber costs as reported by chief financial officers and risk managers from over 111 institutions are:
- Institutions that require vendors to carry minimum levels of cyber insurance to pay for financial losses the vendor causes report average annual cost savings of $1.13 million compared to institutions with no such requirement or that are still evaluating one.
- Higher education officials who collect vendor cyber certificates of insurance and cybersecurity documentation report $946,000 lower average vendor cyber-related costs than peers who do not collect this documentation.
At URMIA’s most recent annual conference, Janette de la Rosa Ducut of the University of California, Riverside noted that “acknowledging that AI can be affected by the following risks will assist you in beginning to understand how to build your response to the risk as well as the opportunities”:
- Data Leaks
The Threat LLMs Bring to Campus
While LLMs pose many threats, they also create an opportunity for officials to embrace this rapidly emerging technology to assist with the collection and analysis of data at an extraordinary scale. Building an effective framework is critical for an organization to embrace the positive aspects while being honest about the threat of the unknown.
ChatGPT, BARD, BLOOM, Bison, and Jurassic-1 have crept into our daily lexicon over the past year. In the academic sphere, students, faculty, and administrators began by exploring the potential of this new technology. The power of AI “to help students cheat” dominates the negative framing in our consciousness, but academic misconduct is only one aspect of LLM use in higher education. Institutions must look beyond the academic and take an enterprise-wide view of how LLMs and AI can be used.
Take the example of the widely accepted HECVAT, which, by its very nature of being self-attested, creates the risk of a vendor using AI to complete its HECVAT to appear more cybersecure. If vendors are using AI to answer the HECVAT, could they also be allowing AI to build and deploy malware? The answer to both questions is almost certainly yes. We are not implying that the vendors you work with have any negative intent or that this will occur, but giving any third-party vendor access to your system information is inherently a risk, albeit a necessary one, for the conduct of business. According to the 2023 IBM Cost of a Data Breach Report, the average cost of a data breach for US organizations in 2022 was $9.4 million. Even more staggering, 98% of organizations have had integrations with at least one third-party vendor that has been breached in the last two years.1 Dollar figures like these make it imperative that institutions understand the potential loss their vendors could bring to their doorstep.
A recent survey conducted by Higher Ed Study2 showed that 65% of respondents have seen growth in the number of vendors reporting a cyberattack. If a school relies on the HECVAT as its risk management tool, the institution is left open to accepting the risk of a vendor using AI to complete the assessment to the institution’s acceptable standards. Fines and penalties under the Gramm-Leach-Bliley Act (GLBA) now apply to Title IV schools, adding yet another layer of risk when vetting the vendors doing business on campus. Even adding a requirement for SOC 2 verification cannot eliminate the risk exposure unless schools embrace the technology and use it to their advantage.
Steps in the Right Direction
The hard fact is that there is currently no reliable method to detect whether text or code was written by AI or by a fallible human. Basic risk management rules still apply, however. Examining a vendor’s LLM cyber risk, improving your knowledge of your vendors’ cyber hygiene, and requiring vendors to transfer their own cyber risk to third-party guarantors are three immediate steps you can take to reduce your exposure, and they can also show your own cyber/data security insurer that you are addressing the issue.
A Positive Aspect of LLMs/AI
On the opposite side of the risk exposure, AI can be used to build a more robust risk management program. Being able to assemble, analyze, and report on massive amounts of data is an effective way to harness the power of this new technology.
Understanding the Tools is Key
The emerging area of artificial intelligence and LLMs will only continue to grow as the technology continues to develop. Understanding the capabilities of these tools is the best way to manage your institution’s response.
Want to Learn More?
1 The 2023 IBM Cost of a Data Breach Report, 2023 Verizon’s Data Breach Investigation Report
2 Higher Ed Study, August 2023
URMIA's 2023 Annual Conference registrants can review the slide deck from a companion presentation, "ChatGPT Does More Than Write a Term Paper."