This AI Ecosystem Report, featuring CISO Chris Hughes, an Acceleration Economy practitioner analyst, looks at Databricks’ AI Security Framework.
Highlights
00:09 — Databricks’ AI Security Framework starts off covering AI and machine learning (ML) model types: predictive ML models like PyTorch and Hugging Face; state-of-the-art open models like Llama, and external models of third-party services like OpenAI’s ChatGPT and Anthropic.
01:13 — The framework covers four system stages: data operations, model operations, model deployment and serving, and operations and platform. First: data operations. Risks include insufficient access controls, missing data classifications, or poor data quality. Next, model operations: Risks include model drift, ML supply chain vulnerabilities, and model theft.
02:30 — Next up is model deployment and serving. This includes components like model serving inference requests or responses. Some of the risks include prompt injection, model breakout, and output manipulation.
The AI Ecosystem Q1 2024 Report compiles the innovations, funding, and products highlighted in AI Ecosystem Reports from the first quarter of 2024. Download now for perspectives on the companies, investments, innovations, and solutions shaping the future of AI.
03:09 — The last system stage is called operations and platform. Some of the risks here will look very familiar because they’re broader cybersecurity risks. These include a lack of enforcement and repeatable standards as well as a lack of vulnerability management, compliance, and incident response.
04:02 — The Databricks platform, and how it addresses these risks, is covered in great detail, using specific examples from some of its customers.
Ask Cloud Wars AI Agent about this analysis