
Information Security And Machine Learning






Becky Goza, CISSP, Senior Manager of Information Security, Information Technology, Love's Travel Stops & Country Stores


Given the growing importance of artificial intelligence (AI) capabilities and tools to the retail industry, and growing interest among policymakers in legislation and regulation that could affect retailers’ ability to use these tools, the National Retail Federation (NRF) recently established a new AI Working Group to facilitate policy and stakeholder engagement on AI issues and the development of practices and guidelines for the use of AI within retail.

Specifically, the NRF AI Working Group will:

•Provide input into US and international policy proposals that may impact the ability of retailers to use AI in support of their business objectives;

•Work with NRF to develop common principles and guidelines for the retail use of AI that can inform policy engagement and support internal governance of AI activities;

•Facilitate the exchange of information among retailers with respect to how they are using AI; and

•Help to educate external stakeholders (including policymakers and the news media) on how retailers are using AI tools and capabilities.

The NRF intends to staff the AI Working Group with a cross-functional group of participants from retail member companies, including technology leaders and experts responsible for AI and data science initiatives, key business leaders who are major users of AI, and individuals in policy, legal and risk management roles with responsibility for AI governance. The inaugural meeting will occur in the second week of June 2023.

For a head of information security, participation seems appropriate, if not mandatory, for competitive advantage. Most Chief Information Security Officers are discussing AI security use cases that help with resource and budget constraints, or that set their information security functions apart on this strategic topic.


At the same time, one must question how much of the current generative AI fanfare is attributable to people picking up on the AI discussion, coupled with increased AI investment by several key vendors, to rebrand older machine learning techniques.

Machine learning has been in use by many organizations for a long time. As a more constrained "subset" of AI, it is the area where vendors have the most to claim, and almost every security vendor claims to provide machine learning to some degree. A few of the information security machine learning use cases offered by Splunk include the following:

•Detect Outliers in Number of Logins (vs. Predicted Value)

•Predict the Presence of Malware

•Predict the Presence of Vulnerabilities

•Predict VPN Usage

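To make the first use case concrete, a minimal sketch of detecting outliers in login counts versus a predicted value might look like the following. This is an illustrative rolling-mean z-score approach, not Splunk's actual Machine Learning Toolkit implementation; the function name and threshold are assumptions chosen for the example.

```python
import statistics

def detect_login_outliers(daily_logins, window=7, threshold=3.0):
    """Flag days whose login count deviates from a rolling-mean
    'predicted value' by more than `threshold` standard deviations."""
    outliers = []
    for i in range(window, len(daily_logins)):
        history = daily_logins[i - window:i]
        predicted = statistics.mean(history)
        stdev = statistics.stdev(history)
        if stdev == 0:
            continue  # flat history; skip to avoid division by zero
        z_score = abs(daily_logins[i] - predicted) / stdev
        if z_score > threshold:
            # record (day index, observed count, predicted count)
            outliers.append((i, daily_logins[i], round(predicted, 1)))
    return outliers

# A week of normal logins followed by a suspicious spike on day 7.
counts = [100, 104, 98, 102, 99, 101, 103, 500, 100, 97]
print(detect_login_outliers(counts))
```

A real deployment would replace the rolling mean with a fitted model and stream events from the SIEM, but the principle is the same: compare the observed count to a prediction and alert on large deviations.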

In addition to these brief examples, AI can be used for policy development as well as risk assumption documentation.

Where are the Information Security challenges related to AI?

• How do we resource AI, and which use cases should be implemented? Many information security teams have not yet fully exploited the machine learning use cases that have been available for years, and those same teams now anticipate being asked to enable the business for AI initiatives while simultaneously protecting the organization by placing guardrails around proposed business use cases. This feels very much like security awareness 2.0, placing a high burden on information technology and information security to maintain visibility into user activity associated with AI.

• Does an AI strategy violate data classification protocols? Many companies may have to rethink their data classification protocols or at the very least consider this crucial area when validating AI use cases.

• Can or should Information Security drive AI platform use toward tools that provide stronger content visibility and control? For example, should a Microsoft shop require the use of Copilot for all AI use cases?

• What obligations rest with information security beyond guardrail development, visibility gap assessments, and data leakage controls? The increased use of AI will be an ongoing responsibility for CISOs. How can AI technology be used effectively and responsibly? How can the risks be mitigated internally and externally? The answers to those questions, just like the technology they concern, will continue to evolve.





