Accelerating Generative AI

David McIntyre

Jan 2, 2024

Workloads using generative artificial intelligence trained on large language models are frequently throttled by insufficient resources and by dataflow bottlenecks in memory, storage, compute, and the network. If not identified and addressed, these bottlenecks can constrain Gen AI application performance well below optimal levels (the sketch at the end of this post shows one simple way to spot such a bottleneck). Given the compelling uses across natural language processing (NLP), video analytics, document resource development, image processing, image generation, and text generation, being able to run these workloads efficiently has become critical to many IT and industry segments. The resources that contribute to generative AI performance and efficiency include CPUs, DPUs, GPUs, and FPGAs, plus memory and storage controllers.

On January 24, 2024, the SNIA Networking Storage Forum (NSF) is convening a panel of experts to discuss how to tackle Gen AI challenges at our live webinar, “Accelerating Generative AI – Options for Conquering the Dataflow Bottlenecks,” where a broad cross-section of industry veterans will provide insight into the following:
  • Defining the Gen AI dataflow bottlenecks
  • Tools and methods for identifying acceleration options
  • Matchmaking the right xPU solution to the target Gen AI workload(s)
  • Optimizing the network to support acceleration options
  • Moving data closer to processing, or processing closer to data
  • The role of the software stack in determining Gen AI performance
This is a session you don’t want to miss! Register today to save your spot.
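As a taste of the bottleneck-identification topic above, here is a minimal, illustrative sketch (not from the webinar) that times the data-loading stage of a training loop against the compute stage; the synthetic reader and step function are placeholders standing in for a real dataloader and model.

    import time

    def profile_pipeline(batches, step_fn):
        """Time data loading vs. compute to see which dominates a training loop."""
        load_time = compute_time = 0.0
        it = iter(batches)
        while True:
            t0 = time.perf_counter()
            try:
                batch = next(it)          # data-loading / I/O stage
            except StopIteration:
                break
            t1 = time.perf_counter()
            step_fn(batch)                # compute stage (forward/backward pass)
            t2 = time.perf_counter()
            load_time += t1 - t0
            compute_time += t2 - t1
        total = load_time + compute_time
        print(f"data loading: {100*load_time/total:.1f}%  "
              f"compute: {100*compute_time/total:.1f}%")

    # Synthetic stand-ins: a "slow" storage read feeding a fast training step.
    def slow_reader(n):
        for _ in range(n):
            time.sleep(0.02)              # simulated storage/network latency
            yield [0] * 1024

    profile_pipeline(slow_reader(50), lambda b: sum(b))

If data loading dominates, the workload is I/O-bound and acceleration belongs in the storage or network path; if compute dominates, a faster xPU is the better investment.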


Namespace Management Q&A

David Slik

Dec 12, 2023

The SNIA Cloud Storage Technologies Initiative (CSTI) recently hosted a live webinar, “Simplified Namespace Management – The Open Standards Way,” where David Slik, Chair of the SNIA Cloud Storage Technical Work Group (TWG), provided a fascinating overview of how to tackle the complexities of dynamic namespaces. If you missed the live webinar, you can view it on-demand and access a copy of the webinar slides at the SNIA Educational Library. Attendees at the live event asked several interesting questions. Here are answers to them all.

Q. How are queues assigned to individual namespaces? How many queues are assigned to a particular namespace? Can we customize this, and if so, how? What is the difference between a normal namespace and an SR-IOV-enabled namespace? Can you please explain sets, domains, and endurance groups?

A. For NVMe® namespace management, the Cloud Storage TWG recommends the use of the SNIA Swordfish® Specification. The SNIA Cloud Data Management Interface (CDMI™) is designed to provide a cloud abstraction that hides the complexities and details of the underlying storage implementation and infrastructure from the cloud user. Hiding these complexities is a key part of what makes a cloud a cloud, in that:

a) knowledge and specification of the underlying infrastructure should not be required in order to consume storage services (simplification of use),
b) the underlying infrastructure will be constantly and dynamically changing, and these changes should not be visible to the end user (to enable operational flexibility for the cloud provider), and
c) the desired quality of service, data protection, and other client-required storage services should be indicated by intent, rather than as a side effect of the underlying configuration (ideally via declarative metadata).

These are also three key principles of CDMI, which guides us to avoid directly exposing or managing the underlying storage infrastructure.

Q. How do the responses behave if they are really large? For example, can I get just the metadata warning me that there are 70 billion objects at the top level, so I don’t have to spend the time retrieving them all before deciding to dive into one of them?

A. When obtaining information about data objects and containers, CDMI allows the request to specify which fields should be returned in the JSON response. This is described in sections 8.4.1 and 9.4.1 of the CDMI specification, and uses standard URI query parameters. For example, to get the object name, metadata, and number of children for a container, the following request URI would be used:

    GET /cdmi/2.0.0/MyContainer/?objectName&metadata&childrenrange

CDMI also allows range requests for listing a subset of the children of a container. For example, listing the first 200 children of a container would be accomplished with the following request URI:

    GET /cdmi/2.0.0/MyContainer/?children=0-199

There is also a draft extension to CDMI that supports recursive children listing and obtaining information about children in a single request, which can dramatically reduce the number of requests required to enumerate a container when information about each child is required.
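To make the query-parameter mechanics concrete, here is a minimal, hypothetical sketch using Python’s requests library against the GET examples above; the endpoint URL, credentials, and container name are placeholders, and behavior for out-of-range child requests should be checked against your server.

    import requests

    BASE = "https://cdmi.example.com/cdmi/2.0.0"   # hypothetical CDMI endpoint
    HEADERS = {"Accept": "application/cdmi-container",
               "X-CDMI-Specification-Version": "2.0.0"}

    # Ask only for the fields we need: name, metadata, and the child count.
    r = requests.get(f"{BASE}/MyContainer/?objectName&metadata&childrenrange",
                     headers=HEADERS, auth=("user", "pass"))
    info = r.json()
    print(info["objectName"], info["childrenrange"])

    # Page through children 200 at a time instead of fetching all of them.
    start = 0
    while True:
        r = requests.get(f"{BASE}/MyContainer/?children={start}-{start + 199}",
                         headers=HEADERS, auth=("user", "pass"))
        children = r.json().get("children", [])
        if not children:
            break
        for child in children:
            print(child)
        start += 200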
Q. Where can I go to get help using CDMI with my application?

A. SNIA provides free access to the CDMI specification on the SNIA website. Extensions to the CDMI standard are also publicly available here. The SNIA Cloud Storage Technical Work Group (TWG) provides implementation assistance and discusses extensions and errata to the standard as part of its weekly work group calls. Interested parties are encouraged to join the TWG.

Q. If I were not using CDMI, what tools or methods would I need to do the same kinds of operations? What else is out there?

A. The Cloud Storage TWG does not know of any similar standards for namespace management. To manage namespaces without using CDMI, one would need to do the following:

a) Define or select an HTTP-based protocol that provides basic request/response semantics and includes authentication. This is provided by all of the cloud providers for their cloud APIs.
b) Define or select a set of APIs for enumerating namespaces, for example, the ListBuckets API in AWS S3 and the List Directories and Files API in Microsoft Azure Files.
c) Define a set of APIs for listing and specifying how namespaces (files, directories, objects, and containers) can be exported or imported.

While each of these exists for the major cloud providers, they are unique to each provider and storage type. CDMI provides a common, open, and unified way to manage all types of storage namespaces.

Q. How does CDMI help address security in my namespace management?

A. CDMI provides a number of security functions that assist with namespace management:

a) Every object in CDMI, including namespaces, can have an access control list (ACL) that specifies what operations can be performed against that object. This is described in section 17.2 of the CDMI specification. ACLs are based on standard NFSv4 ACLs and allow metadata modifications (e.g., CDMI exports and imports) to have separate access control entries (ACEs).
b) CDMI objects can have their access control decisions delegated to a customer-provided system via Delegated Access Control (DAC), which can provide finer-grained access control than ACLs where needed. This allows policies to take the specific import and export requests themselves into account, and to interface with policy enforcement frameworks such as XACML and open source policy engines such as the Open Policy Agent (OPA).
c) CDMI allows mapping of user credentials to the user principal and group to be performed by external systems, such as Active Directory. This mapping can be done on an object-by-object basis, allowing objects managed by different security domains to co-exist within a single unified namespace.
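For a sense of what such an ACL looks like in practice, here is an illustrative sketch of NFSv4-style ACE entries carried in a container’s cdmi_acl metadata; the identifiers and mask names are examples, and the authoritative field definitions and mask values are in section 17.2 of the CDMI specification.

    import json

    # Illustrative NFSv4-style ACL carried in CDMI metadata (check CDMI spec
    # section 17.2 for the normative acetype/aceflags/acemask values).
    acl_update = {
        "metadata": {
            "cdmi_acl": [
                {   # allow a named user full control of the container
                    "acetype": "ALLOW",
                    "identifier": "alice@example.com",   # example principal
                    "aceflags": "CONTAINER_INHERIT",
                    "acemask": "ALL_PERMS",
                },
                {   # everyone else may only read and list children
                    "acetype": "ALLOW",
                    "identifier": "EVERYONE@",
                    "aceflags": "NO_FLAGS",
                    "acemask": "READ",
                },
            ]
        }
    }
    print(json.dumps(acl_update, indent=2))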


It’s All About Cloud Object Storage Interoperability

Michael Hoard

Dec 11, 2023

Object storage has firmly established itself as a cornerstone of modern data centers and cloud infrastructure. Ensuring API compatibility has become crucial for object storage developers who want to benefit from the wide ecosystem of existing applications. However, achieving compatibility can be challenging due to the complexity and variety of the APIs, access control mechanisms, and performance and scalability requirements. The SNIA Cloud Storage Technologies Initiative, together with the SNIA Cloud Storage Technical Work Group, is working to address the issues of cloud object storage complexity and interoperability. We’re kicking off 2024 with two exciting initiatives: 1) a webinar on January 9, 2024, and 2) a Plugfest in September 2024. Here are the details:

Webinar: Navigating the Complexities of Object Storage Compatibility

In this webinar, we’ll highlight real-world incompatibilities found in various object storage implementations. We’ll discuss specific examples of existing discrepancies, such as missing or incorrect response headers, unsupported API calls, and unexpected behavior, and we’ll describe the implications these have on actual client applications. This analysis is based on years of experience with the implementation, deployment, and evaluation of a wide range of object storage systems on the market. Attendees will leave with a deeper understanding of the challenges around compatibility and how to address them in their own applications. Register here to join us on January 9, 2024.

Plugfest: Cloud Object Storage Plugfest

SNIA is planning an open, collaborative Cloud Object Storage Plugfest, co-located at the SNIA Storage Developer Conference (SDC) in September 2024, to work on improving cross-implementation compatibility for client and/or server implementations of private and public cloud object storage solutions. This endeavor is designed to be an independent, vendor-neutral effort with broad industry support, focused on a variety of solutions, both on-premises and in the cloud. The Plugfest aims to reduce compatibility issues, improving customer experience and increasing the adoption rate of object storage solutions. Click here to let us know if you’re interested. We hope you will consider participating in both of these initiatives!
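As a taste of the kind of compatibility checking involved, here is a small, hypothetical probe using boto3 against an S3-compatible endpoint; the endpoint URL, credentials, and bucket name are placeholders, and the headers checked are merely illustrative examples of fields AWS S3 returns that some implementations omit or mangle.

    import boto3

    # Hypothetical S3-compatible endpoint under test; credentials are placeholders.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://object.example.com",
        aws_access_key_id="TESTKEY",
        aws_secret_access_key="TESTSECRET",
    )

    s3.create_bucket(Bucket="compat-test")   # assumes the bucket does not exist yet
    s3.put_object(Bucket="compat-test", Key="probe.txt", Body=b"hello")
    resp = s3.head_object(Bucket="compat-test", Key="probe.txt")
    headers = resp["ResponseMetadata"]["HTTPHeaders"]

    # Flag response headers that applications commonly depend on.
    for expected in ("etag", "last-modified", "content-length"):
        status = "ok" if expected in headers else "MISSING"
        print(f"{expected}: {status}")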


Erin Farr

Nov 14, 2023

Why Distributed Edge Data is the Future of AI Q&A
At our recent SNIA Cloud Storage Technologies Initiative (CSTI) webinar, “Why Distributed Edge Data is the Future of AI,” our expert speakers, Rita Wouhaybi and Heiko Ludwig, explained what’s new and different about edge data, highlighted use cases and phases of AI at the edge, covered federated learning, discussed privacy for edge AI, and provided an overview of the many other challenges and complexities being created by increasingly large AI models and algorithms. It was a fascinating session. If you missed it, you can access it on-demand, along with a PDF of the slides, at the SNIA Educational Library. Our live audience asked several interesting questions. Here are answers from our presenters.

Q. With the rise of large language models (LLMs), what role will edge AI play?

A. LLMs are very good at predicting events based on previous data, often referred to as the next token in LLMs. Many edge use cases are also about predicting the next event, e.g., that a machine is going to go down, an outage, a security breach on the network, and so on. One of the challenges of applying LLMs to these use cases is converting the data (tokens) into text with the right context.

Q. After you create an AI model, how often do you need to update it?

A. That depends heavily on the dataset itself, the use case KPIs, and the techniques used (e.g., network backbone and architecture). Development used to rely on a very long data collection cycle, so that the data would include outliers and rare events. We are moving away from that kind of development because of its cost and the long time required. Instead, most customers start with a few data points and iterate, updating their models more often. Such a strategy enables a faster return on investment, since you deploy a model as soon as it is good enough. Also, new AI techniques such as unsupervised learning, or even selective annotation, can enable some use cases to get a model that is self-learning, or at least self-adaptable.

Q. Deploying AI is costly. What use cases tend to be cost effective?

A. Just like any technology, prices drop as scale increases. We are at an inflection point where we will see more use cases become feasible to develop and deploy. But yes, many use cases might not have an ROI. Typically, we recommend starting with use cases that are business critical or have the potential of improving yield, quality, or both.

Q. Do you have any measurements of energy usage for edge AI? I’m wondering if there is an ecological argument for edge AI in addition to the others mentioned.

A. This is a very good question, and it is top of mind for many in the industry. There is no data yet to support sustainability claims; however, running AI at the edge can provide more control and further refinement for making tradeoffs in relation to corporate goals, including sustainability. Of course, compute at the edge reduces data transfer and the environmental impact of those functions.


Addressing the Hidden Costs of AI

Erik Smith

Nov 9, 2023

The latest buzz around generative AI ignores the massive costs to run and power the technology. Understanding the sustainability and cost impacts of AI, and how to effectively address them, will be the topic of our next SNIA Networking Storage Forum (NSF) webinar, “Addressing the Hidden Costs of AI.” On December 12, 2023, our SNIA experts will offer insights on the potentially hidden technical and infrastructure costs associated with generative AI. You’ll also learn best practices and potential solutions as they discuss:
  • Scalability considerations for generative AI in enterprises
  • Significant computational requirements and costs for Large Language Model inferencing (see the rough sketch at the end of this post)
  • Fabric requirements and costs
  • Sustainability impacts due to increased power consumption, heat dissipation, and cooling implications
  • AI infrastructure savings: On-prem vs. Cloud
  • Practical steps to reduce impact, leveraging existing pre-trained models for specific market domains
Register today. Our presenters will be available to answer your questions.
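To give a feel for the inference-cost point in the list above, here is a rough, illustrative back-of-envelope estimate (not from the webinar); the parameter count, token volume, and hardware figures are assumptions chosen only to show the arithmetic, and real deployments are often memory-bandwidth bound, so treat the result as a floor.

    # Back-of-envelope LLM inference cost: ~2 FLOPs per parameter per generated
    # token is a common rule of thumb for dense transformer decoding.
    params = 70e9                  # assumed model size: 70B parameters
    tokens_per_day = 1e9           # assumed daily token volume across all users
    flops_per_token = 2 * params

    gpu_flops = 300e12             # assumed sustained throughput per GPU (FLOP/s)
    gpu_hour_cost = 3.00           # assumed cost of one GPU-hour in dollars

    gpu_seconds = tokens_per_day * flops_per_token / gpu_flops
    gpu_hours = gpu_seconds / 3600
    print(f"GPU-hours/day: {gpu_hours:,.0f}  "
          f"est. compute cost/day: ${gpu_hours * gpu_hour_cost:,.0f}")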


An Open Standard for Namespace Management

Michael Hoard

Sep 20, 2023

The days of simple, static, self-contained file systems have long passed. Today, we have complex, dynamic namespaces, mixing block, file, object, key-value, queue, and graph-based resources, accessed via multiple protocols, and distributed across multiple systems and geographic sites. These complexities result in new challenges for simplifying management.

There is good news on addressing this issue, and the SNIA Cloud Storage Technologies Initiative (CSTI) will explain how in our live webinar, “Simplified Namespace Management – The Open Standards Way,” on October 18, 2023. David Slik, Chair of the SNIA Cloud Storage Technical Work Group, will demonstrate how the SNIA Cloud Data Management Interface (CDMI™), an open ISO standard (ISO/IEC 17826:2022) for managing data objects and containers, already includes extensive capabilities for simplifying the management of complex namespaces. In this webinar, you’ll learn the benefits of simplifying namespace management in an open standards way, including namespace discovery, introspection, exports, imports, and more (a small introspection sketch follows this post), discussing:
  • Challenges and limitations with proprietary namespace management
  • Overview of namespaces and industry evolution
  • Lack of portability between platforms
  • Using CDMI for simplified and consistent namespace management
  • Use cases for namespace management
As one of the key architects of CDMI, David will dive into the details, discuss real-world use cases and answer your questions. We hope you’ll join us on October 18th. Register here.
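For a flavor of what namespace introspection looks like in CDMI, here is a minimal, hypothetical sketch that reads a container’s exports and children using the standard field-selection query parameters; the endpoint and container paths are placeholders, and the export structures printed are whatever the server reports rather than a normative format.

    import requests

    BASE = "https://cdmi.example.com/cdmi/2.0.0"   # hypothetical CDMI endpoint
    HEADERS = {"Accept": "application/cdmi-container",
               "X-CDMI-Specification-Version": "2.0.0"}

    # Ask only for the exports and children fields of a container, so we can
    # discover how (and to whom) this part of the namespace is made available.
    r = requests.get(f"{BASE}/shared/projects/?exports&children",
                     headers=HEADERS, auth=("user", "pass"))
    container = r.json()

    for protocol, settings in container.get("exports", {}).items():
        print(f"exported via {protocol}: {settings}")
    for child in container.get("children", []):
        print(f"child namespace entry: {child}")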


Erin Farr

Sep 14, 2023

The Rise in Confidential AI Q&A
Confidential AI is a new collaborative platform for data and AI teams to work with sensitive data sets and run AI models in a confidential environment. It includes infrastructure, software, and workflow orchestration to create a secure, on-demand work environment that meets an organization’s privacy requirements and complies with regulatory mandates. It’s a topic the SNIA Cloud Storage Technologies Initiative (CSTI) covered in depth at our webinar, “The Rise in Confidential AI.” At this webinar, our experts, Parviz Peiravi and Richard Searle, provided a deep and insightful look at how this dynamic technology works to ensure data protection and data privacy. Here are their answers to the questions from our webinar audience.

Q. Are businesses using Confidential AI today?

A. Absolutely. We have seen a big increase in the adoption of Confidential AI, particularly in industries such as financial services, healthcare, and government, where Confidential AI is helping these organizations enhance risk mitigation, including cybercrime prevention, anti-money laundering, fraud prevention, and more.

Q. With compute capabilities on the edge increasing, how do you see Trusted Execution Environments evolving?

A. One of the important things about Confidential Computing is that although it’s a discrete privacy-enhancing technology, it’s part of the broader, distributed data center compute hardware underneath. However, the edge is going to be increasingly important as we look ahead to things like 6G communication networks. We see a role for AI at the edge in terms of things like signal processing and data quality evaluation, particularly in situations where the data is being sourced from different endpoints.

Q. Can you elaborate on attestation within a Trusted Execution Environment (TEE)?

A. One of the critical things about Confidential Computing is the need for an attested Trusted Execution Environment. In order to have the reassurance of confidentiality and the isolation and integrity guarantees that we spoke about during the webinar, attestation is the foundational truth of Confidential Computing and is absolutely necessary. In every secure implementation of Confidential AI, attestation provides the assurance that you’re working in a protected memory region, that data and software instructions can be secured in memory, and that the AI workload itself is shielded from the other elements of the computing system. If you’re starting with hardware-based technology, then you have the utmost security, removing the majority of actors outside the boundary of your trust. However, this also creates a level of isolation that you might not want for an application that doesn’t need such a high level of security. You must balance utmost security against your application’s appetite for risk.

Q. What is your favorite reference for implementing Confidential Computing that bypasses the OS, BIOS, and VMM (Virtual Machine Manager) and uses the root trust certificate?

A. It’s important to know that there are different implementations of Trusted Execution Environments, each relevant to different purposes. For example, there are process-based TEEs that enable a very discrete definition of a TEE and provide the ability to write specific code and protect very sensitive information, because of the isolation from things like the hypervisor and virtual machine manager.
There are also technologies available now that have a virtualization basis and include a guest operating system within their trusted computing base; these provide greater flexibility in terms of implementation, so you might want to use one of them when you have a larger application or a more complex deployment. The Confidential Computing Consortium, which is part of The Linux Foundation, is also a good resource for keeping up with Confidential AI guidance.

Q. Can you please give us a picture of the upcoming standards for strengthening security? Do you believe that the European Union’s AI Act (EU AI Act) is going in the right direction and that it will have a positive impact on the industry?

A. That’s a good question. The draft EU AI Act was approved in June 2023 by the European Parliament, but the UN Security Council has also put out a call for international regulation, in the same way that we have treaties and conventions. We think what we’re going to see is different nation states taking discrete approaches. The UK has taken an open approach to AI regulation in order to stimulate innovation. The EU already has a very prescriptive data protection regulation method, and the EU AI Act takes a similar approach: it’s quite prescriptive and designed to complement the data privacy regulations that already exist. For a clear overview of the EU’s groundbreaking AI legislation, refer to the EU AI Act summary. It breaks down the key obligations, compliance responsibilities, and the broader impact on various AI applications.

Q. Where do you think some of the biggest data privacy issues are within generative AI?

A. There’s quite a lot of debate already about how these massive generative AI systems have used data scraped from the web, whether things like copyright provisions have been acknowledged, and whether data privacy in imagery from social media has been respected. At an international level, it’s going to be interesting to see whether people can agree on a cohesive framework to regulate AI, and whether different countries can agree. There’s also the issue of the time required to develop legislation being superseded by technological developments; we saw how disruptive ChatGPT was last year. There are also ethical considerations around this topic, which the SNIA CSTI covered in the webinar “The Ethics of Artificial Intelligence.”

Q. Are you optimistic that regulators can come to an agreement on generative AI?

A. In the last four or five years, regulators have become more open to working with financial institutions to better understand the impact of adopting new technologies such as AI and generative AI. This collaboration between regulators and those in the financial sector is creating momentum. Regulators such as the Monetary Authority of Singapore are leading this strategy, actively working with vendors to understand the technology’s application within financial services and how to guide the rest of the banking industry.
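To make the attestation flow above more concrete, here is a deliberately simplified, hypothetical sketch of what a relying party does with a TEE quote before releasing secrets; every function and field here is an illustrative placeholder, not a real vendor API, and production systems should use the hardware vendor’s or Confidential Computing Consortium projects’ actual verification libraries.

    # Hypothetical sketch of remote attestation from the relying party's side.
    # None of these helpers are real vendor APIs; they name the conceptual steps.

    EXPECTED_MEASUREMENT = "a3f1..."   # hash of the approved AI workload image

    def verify_quote_signature(quote: dict) -> bool:
        """Placeholder: check that the quote's signature chains to a trusted
        hardware-vendor root certificate."""
        return quote.get("signature_valid", False)

    def release_model_key(quote: dict) -> bytes:
        """Release the decryption key for sensitive data only to an attested TEE."""
        if not verify_quote_signature(quote):
            raise PermissionError("quote signature does not chain to a trusted root")
        if quote["measurement"] != EXPECTED_MEASUREMENT:
            raise PermissionError("TEE is running unapproved code")
        return b"..."   # key material, normally wrapped for the TEE's public key

    # Usage: the TEE sends its quote; keys flow only after both checks pass.
    quote = {"signature_valid": True, "measurement": EXPECTED_MEASUREMENT}
    key = release_model_key(quote)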
